Threatpost for B2B

The First Stop For Security News

Microsoft to Block Unwanted Adware July 1

Fri, 04/04/2014 - 14:11

Microsoft has announced that this summer it will change the way it classifies adware and begin blocking unwanted and intrusive advertisements for users.

New objective criteria drafted by the company stipulate that by July 1, internet ads must have a visible close button and must clearly state who’s behind them, or the programs serving them will be branded as adware.

Michael Johnson, a researcher at the company’s Malware Protection Center, described the changes in a blog post Thursday afternoon.

According to Johnson, advertisements must adhere to the following rules, or they “will be detected as adware and immediately removed from the user’s machine:”

Advertisements must:

  • Include an obvious way to close the ad.
  • Include the name of the program that created the ad.

Currently, when Microsoft’s security products detect that a program is operating suspiciously, the program is allowed to run, and the user is alerted and given a recommended option to proceed. Beginning July 1, when adware is found, Microsoft will stop the program entirely, notify the user and give them the option to restore it if they want.

Going forward, users will also be given the option to uninstall whatever program is making the ads – provided, of course, the program has an uninstall entry in the Windows Control Panel.

The efforts are being implemented partly to give users more choice and control, and partly to give developers a three-month window to ensure their programs comply with Microsoft’s new rules.

The approach reflects the company’s latest update to the objective criteria that define how its antimalware products, including Security Essentials, Windows Defender and Safety Scanner, will identify potentially unwanted software.

“We believe that it will make it easy for software developers to utilize advertising while at the same time empowering users to control their experience,” Johnson wrote of the new criteria yesterday.

Windows XP End-Of-Life Breeding Equal Parts FUD, Legit Concerns

Fri, 04/04/2014 - 12:13

For those of you anticipating the start of a Walking Dead-style malware apocalypse next Tuesday, calm yourselves.

The official end of security support for Windows XP is upon us, but it’s important to check some anxiety at the door and keep some perspective.

“I’ve been a forensics investigator for 14 years and in my experience, I don’t know that I’ve come across one incident, or very few anyway, where a vulnerability was exploited where an unpatched system wasn’t the source of a breach,” said Christopher Pogue, director at Trustwave. Pogue said breaches are much more likely to be blamed on poor passwords, weak access control systems or a poorly configured firewall than on a glaring hole in the underlying operating system.

“All the administration stuff in place around these systems falls down. Attackers leverage that because they want the path of least resistance,” Pogue said. “You have to presume that before they get their exploit on an unpatched XP machine, they have to breach the environment, bypass firewalls, get to the system, pivot to the unpatched system and hope it has critical data on it so they can run exploit code. There are a whole lot of items that have to line up for that to happen.”

The hype and hyperbole around April 8, the latest in a long line of security Doomsdays, is rooted in theories that because a good number of XP systems remain in use storing data and processing transactions, any previously unreported XP vulnerabilities will be perpetual zero-days. The theory continues that attackers have been building and hoarding XP exploits, anxiously wringing their hands waiting for April 8, 2014 to come and go.

Dismissing all of that as FUD would be foolhardy; some attackers who hold XP exploits that will become zero days in a matter of five days are going to wait. Others are less patient (see the recent XP Rich Text Format zero day that will be patched on Tuesday). And for those smaller organizations with fewer IT resources that may still be running XP machines that hum along carrying out their mission day after day, their risk posture will be slouching a little more come Tuesday.

Big picture, however, people are moving off of XP. Qualys CTO Wolfgang Kandek published some numbers based on the company’s flagship vulnerability scanning service indicating the XP installed base had dipped below 15 percent, down from 35 percent 14 months ago. Migrations in the transportation and health care industries are much more dramatic, he said.

“These are two extremes, but all industries are showing a downward slope (migrating off XP); none are stagnant,” Kandek said.

Kandek is in the camp that attackers will intensify their targeting of XP machines and in particular will look at patches for modern Windows 7 and 8 systems to determine whether those vulnerabilities could be present in no-longer-supported XP machines. He also urges organizations that must use XP to isolate those machines from the network, dedicate them to a specialized purpose and, where possible, keep them offline.

“In May, Microsoft will publish bulletins and patches, and those can be taken by a hacker and reverse-engineered. They will ask ‘What does it fix?’ And once they know what it does on Windows 7 or 8, that it changes a DLL or fixes an overflow, they could go into XP and figure out whether the same DLL or overflow vulnerability exists,” Kandek said. “Patches map to vulnerabilities that could be in XP. Sometimes they’re only in a new component of Windows 7, but most of the time you can find those vulnerabilities in XP.”

Kandek said that roughly 70 percent of the vulnerabilities patched in 2013 were found in versions from Windows XP through Windows 8.

“I don’t see why that would stop in May, June or July. Attackers can use that knowledge as a pointer into XP to find if a vulnerability exists. It’s an accelerator for them. My feeling is that after two or three months, there will be tools in public that reliably exploit XP. I can definitely see how that would make an attacker’s work much easier.”

A key difference to point out, however, is that Windows 7 and 8, for example, are radically different under the hood than XP. Microsoft has invested time and money into building mitigations for a number of dangerous memory-based attacks. Technologies such as ASLR and DEP make it much more challenging and costly for an attacker to execute malicious code against vulnerabilities in the operating system. Looking for bugs in XP that live in Windows 7 or 8 just may not be the best use of resources for an attacker.

“An attacker has always chosen the path of least resistance to gain access to a system; they don’t have to exploit the operating system, and for the most part, haven’t,” Trustwave’s Pogue said. “While it’s still possible, if I were a small business owner running XP to store and process data, I’d be concerned about it and take steps to run an updated and patched operating system. Even so, it’s important to remember that’s not a silver bullet. Updating to Windows 7 doesn’t mean you’re necessarily safe. You have to build up defense-in-depth mechanisms. XP has been updated and patched up to now, and I’ve investigated thousands of breaches on XP systems. An updated OS does not always equal security.”

Researchers Uncover Interesting Browser-Based Botnet

Fri, 04/04/2014 - 10:42

Security researchers recently discovered an odd DDoS attack that relied on a persistent cross-site scripting vulnerability in a major video Web site and hijacked users’ browsers in order to flood several target sites with traffic.

The attack involved JavaScript injected into the unnamed video site, which would execute in a user’s browser whenever the user viewed a profile image containing the code. Once the code runs, it fires off an embedded iframe with a DDoS tool that sends GET requests to the target sites. The attacker embedded the malicious code in his own profile image on the video site, and then posted a comment on hundreds of videos so that his profile image appeared next to each comment.

As more and more visitors watched the videos, and therefore viewed the malicious image, the GET requests continued to mount against the targeted sites.

“As a result, each time a legitimate visitor landed on that page, his browser automatically executed the injected JavaScript, which in turn injected a hidden <iframe> with the address of the DDoSer’s C&C domain. There, an Ajax-scripted DDoS tool hijacked the browser, forcing it to issue a DDoS request at a rate of one request per second,” Ronen Atia of Incapsula, the security company that discovered the attack, wrote in an analysis.

“Obviously one request per second is not a lot. However, when dealing with video content of 10, 20 and 30 minutes in length, and with thousands of views every minute, the attack can quickly become very large and extremely dangerous. Knowing this, the offender strategically posted comments on popular videos, effectively created a self-sustaining botnet comprising tens of thousands of hijacked browsers, operated by unsuspecting human visitors who were only there to watch a few funny cat videos.”
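For a sense of scale, consider a rough back-of-the-envelope model of the attack (the viewer figures below are illustrative assumptions, not numbers from Incapsula’s report): each hijacked browser issues one request per second for as long as the video page stays open, so the aggregate rate is simply the number of concurrently open, infected pages.

    # Rough model of aggregate attack traffic from hijacked browsers.
    # All inputs are illustrative assumptions, not figures from the report.
    new_viewers_per_minute = 2_000       # assumed arrivals on infected video pages
    average_watch_minutes = 15           # assumed time each page stays open
    requests_per_browser_per_second = 1  # rate cited in the Incapsula analysis

    concurrent_browsers = new_viewers_per_minute * average_watch_minutes
    attack_rate = concurrent_browsers * requests_per_browser_per_second

    print(f"~{concurrent_browsers:,} hijacked browsers at any given moment")
    print(f"~{attack_rate:,} GET requests per second against the target")

Under those assumptions, 2,000 new viewers a minute watching 15-minute videos translates to roughly 30,000 simultaneously hijacked browsers and 30,000 requests per second, consistent with the “tens of thousands of hijacked browsers” Incapsula describes.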

The company was able to intercept the malicious requests going to the target sites and trace them back to the compromised video site, which Incapsula is not naming yet. The researchers then inserted a piece of their own JavaScript into the requests, replacing the target URL. They were then able to pinpoint the persistent XSS vulnerability and alerted the owners of the compromised site.

Despite that success, Atia said that the attacker behind the DDoS has replaced the original tool he was using with a more sophisticated version.

“This leads us to believe that what we saw yesterday was a sort of POC test run. The current code is not only much more sophisticated, but it is also built for keeping track of the attack, for what seems like billing purposes. From the looks of it, someone is now using this Alexa Top 50 website to set up a chain of botnets for hire,” he said.

The attack Incapsula uncovered shares some characteristics with research that Jeremiah Grossman and Matt Johansen of WhiteHat Security presented at Black Hat last year. In their example, an attacker could inject malicious JavaScript into ads distributed via an ad network and force users’ browsers to perform an operation, whether launching a DDoS attack on a target server or something else.

Facebook Bug Bounty Submissions Dramatically Increase

Thu, 04/03/2014 - 15:00

Facebook today reported a dramatic increase in 2013 submissions to its bug bounty program, and said that despite reports from researchers that it’s becoming difficult to find severe bugs on its various properties, the social network plans to increase rewards for critical bugs.

“The volume of high-severity issues is down, and we’re hearing from researchers that it’s tougher to find good bugs,” Facebook security engineer Collin Greene said. “To encourage the best research in the most valuable areas, we’re going to continue increasing our reward amounts for high priority issues.”

Greene said Facebook paid out $1.5 million in bounties last year, rewarding more than 330 researchers at an average payout of $2,204. Submissions skyrocketed 246 percent over 2012 to 14,763, he said, though most were not eligible for a bounty; only six percent were rated high severity. Greene said that Facebook has been able to cut its response time for critical vulnerabilities down to six hours. Facebook also released geographic stats on its bug submissions, revealing that researchers in India contributed the largest number of valid bugs (136), while researchers in Russia earned on average more than anyone else in the program, $3,961 (38 bugs). U.S.-based researchers, meanwhile, reported 92 bugs and were rewarded on average $2,272.

“Most submissions end up not being valid issues, but we assume they are until we’ve fully evaluated the report,” Greene said. “That attitude makes it possible for us to triage high-priority issues quickly and get the right resources allocated immediately.”

Most leading technology providers have some sort of vulnerability rewards program. Most, including Google, Yahoo, Github and others reward researchers for finding vulnerabilities in Web-based applications and services. Microsoft, however, is an outlier, paying significant rewards for bypasses of mitigations built into Windows and other Microsoft products.

These companies are in a constant tug of war with vulnerability brokers, exploit vendors and the black market, most of whom pay more for bugs than vendors. Microsoft, for example, has tried to narrow the gap with a $100,000 reward for mitigation bypasses, but even a low six-figure payout may pale in comparison to what a less-than-scrupulous researcher could earn on the underground.

Other legitimate programs such as HP’s Zero-Day Initiative offer six-figure paydays at events such as the Pwn2Own contest held in conjunction with the annual CanSecWest conference. This year’s contest paid out $850,000 with French exploit vendor VUPEN cashing in with close to a half-million dollars in prizes.

Facebook’s biggest payout was made in January to Brazilian engineer Reginaldo Silva, who earned $33,500 for what Facebook called an XML External Entities Attack. The vulnerability could allow an attacker to read files from a Facebook server, make requests to internal services and execute code. The bug caused Facebook to disable external entities across its systems and audit the code for similar endpoints, Greene said.

“One of the most encouraging trends we’ve observed is that repeat submitters usually improve over time,” Greene said. “It’s not uncommon for a researcher who has submitted non-security or low-severity issues to later find valuable bugs that lead to higher rewards.”

To that end, Greene said Facebook is giving researchers a new support dashboard where they can view the status of submissions. The bug bounty has also now been extended to Facebook acquisitions Instagram, Parse, Atlas and Onavo.

Microsoft to Fix Word Zero Day with Final XP Patch

Thu, 04/03/2014 - 14:51

In just five days, Microsoft will send off two critical and two important-rated security bulletins in what will be the very last Patch Tuesday release providing support for the Redmond, Wash., computer company’s ancient and always-vulnerable XP operating system.

The critically rated bulletins will address remote code execution vulnerabilities in Microsoft Office, Office Services, and Office Web Apps as well as bugs in Windows and Internet Explorer. The important-rated bulletins will close off holes in Windows and Office.

Of course, the first bulletin will resolve the Microsoft Word zero day. The company issued a special security advisory and produced a Fix-it solution after it spotted targeted attacks exploiting the flaw in the wild late last month. The patch warrants the highest priority despite the fact that observed attacks required hackers to perform a complicated chain of exploits.

“This is a critical vulnerability that could allow remote code execution if a user opens an RTF file in Word 2010 or in Outlook while using Word as the email viewer,” explained Russ Ernst, director of product management at Lumension, in an email interview. “Known to be under active attack, a hacker using this vulnerability could gain user rights.”

The second bulletin, Ernst explained, is a cumulative update for Internet Explorer, which is also critically rated and of high priority for the many IE users on the Web.

“If pushing patches for these new vulnerabilities while working a migration plan for XP and Office 2003 users weren’t enough,” Ernst continued, “administrators are still dealing with the fallout from the recent Pwn2Own competition, which revealed vulnerabilities in all of the major browsers and in Adobe’s Flash Player plug-in.”

To drive home that point, IT departments will indeed have their hands full with these patches, plus Pwn2Own fixes from Mozilla and Google Chrome and a recent Safari patch from Apple.

Wolfgang Kandek from Qualys noted in an interview with Threatpost that this light month of patches is in step with what has been a light overall year for patches. Thus far, Microsoft has issued just 20 bulletins, compared to 36 last year and 28 in 2012.

“That number is lower than where we’re at normally, and I don’t know why,” Kandek admitted. “I think people are submitting fewer vulnerabilities to Microsoft; that’s the only explanation I can come up with at the moment. There’s no reason we’re seeing fewer vulnerabilities and I don’t think there’s less research going on. There is no shortage of people who look for bugs; maybe there is a shortage of people who do it for free.”

Kandek’s observation regarding fewer bug submissions is simultaneously sensible and puzzling. On the one hand, Microsoft has been consistently sweetening the pot for security researchers who disclose bugs for the last year or so. On the other hand, exploit brokers like Vupen and other hacking teams are cashing in at hacking contests like Pwn2Own – where the payouts are bigger than ever – rather than submitting directly to Microsoft.

Regulators To US Banks: Be Vigilant of ATM Fraud, DDoS

Thu, 04/03/2014 - 14:46

U.S. regulators are warning banks this week about a recent rash of “large dollar value” ATM fraud and the ongoing risks posed by distributed denial of service (DDoS) attacks that target public bank websites.

Members of the FFIEC, the Federal Financial Institutions Examination Council, an interagency body of the U.S. government responsible for preparing banking standards and principles, issued the warnings in a statement yesterday.

The FFIEC claims attackers have been able to gain access to and alter the settings on web-based ATM control panels belonging to small and medium-sized institutions. The campaign, nicknamed “Unlimited Operations” by the U.S. Secret Service, allows attackers to withdraw money beyond the controlled limits on ATMs, oftentimes more than the victim’s cash balance.

FFIEC’s warning describes how exactly the control panels figure into the ATMs:

“These control panels, often web-based, manage the amount of money customers may withdraw within a set time frame, the geographic limitations of withdrawals, the types and frequency of fraud reports that its service provider sends to the financial institutions, the designated employee that receives these reports, and other management functions related to card security and internal controls.”

Officials claim hackers used phishing attacks to obtain legitimate employee log-ins and tweak these settings to carry out their attacks, including one that netted them $40 million using 12 debit card accounts.

FFIEC also used the announcement as an opportunity to remind banks about the continued sophistication surrounding DDoS attacks – pointing out a string of attacks that affected institutions in 2012 and warning that they can be used as a “diversionary tactic,” granting hackers the time to root around systems.

Naturally, the FFIEC is encouraging banks to mitigate further risk by following standards already in place, such as PCI-DSS and the use of hardware security modules (HSMs), when it comes to encrypting PINs.

The agency is also encouraging banks, if they haven’t already, to formulate some sort of DDoS readiness plan with a program that prioritizes and assesses risks in their critical systems.

“The members expect financial institutions to take steps to address this threat by reviewing the adequacy of their controls over their information technology networks,” the joint statement reads.

We first learned about “Unlimited Operations” last spring after eight members of the cybercrime ring were indicted in Brooklyn. Associates in at least 26 countries helped the crew cash out fake credit cards at 140 different ATMs to the tune of $45 million ($2.8 million in New York City) in just shy of 24 hours.

According to a federal indictment unsealed last year, the money was later spent on luxury goods such as expensive cars and Rolex watches.

Cyberespionage, Not Cyber Terror, is the Major Threat, Former NSA Director Says

Thu, 04/03/2014 - 10:40

CHANTILLY, VA–The list of threats on the Internet is long and getting longer each day. Cybercrime, nation-state attackers, cyber espionage and hacktivists all threaten the security and stability of the network and its users in one way or another. But one threat that some experts have warned about for years has never emerged: cyber terrorism, a former top U.S. intelligence official said.

In the years after 9/11, as the Internet became an integral part of daily life in much of the world, some in the national security community warned that the network also would become a key conduit for terrorist attacks against a variety of targets. Utilities, critical infrastructure, banks and other vital pieces of the global economy would be choice targets for groups seeking to wreak havoc via electronic attacks. However, those attacks have not materialized.

“I don’t have a single example of cyber terrorism. Not one incident,” Michael Hayden, the former director of the CIA and NSA, said during a keynote speech at the Systems Engineering DC conference here Thursday.

“They use the Web to recruit and to proselytize, but they don’t use the Web to attack.”

Cyber terrorism, much like its close relation cyberwar, has become a loaded term in the security and intelligence communities. There are any number of definitions floating around for each of them, and none seems to have become authoritative. Attacks such as Stuxnet and Flame have been touted in some circles as examples of cyberwar, while others dispute this notion. And there’s often quite a bit of overlap between cyber terrorism, typical cybercrime and other attacks in discussions about the topic.

But the use of the Internet by traditional terrorist groups for attacks against physical assets–or to disrupt the Internet itself–is not something that’s going on right now, Hayden said.

“They’re into mass destruction and not mass disruption. Maybe they don’t want to disrupt the platform they’re using,” he said. “If they ever downshift to mass disruption, it could be very troubling.”

Hayden, who now works for the Chertoff Group, said that the threat landscape today is growing more complex every day, and that cybercrime, hacktivism, nation-state attackers and other elements all play a part in this. Of the malicious activities that pervade the Internet today, Hayden said that perhaps the largest threat is cyber espionage. Governments using electronic means to conduct corporate espionage or even traditional espionage remotely has become a sensitive topic in diplomatic circles, especially in light of the Snowden revelations about the NSA’s activities.

“The overwhelming majority is people going where they’re not invited and taking stuff they’re not entitled to,” Hayden said.

He emphasized that the U.S. intelligence community is very good at its job, which to a large degree involves taking other people’s stuff, but said the CIA and NSA don’t do so on behalf of American corporate interests. That, he said, is an important distinction.

“I know a fair bit about stealing stuff in the cyber domain. We’re really good at it, and we do it to keep you safe,” he said.

Tool Estimates Incident Response Cost for Businesses

Thu, 04/03/2014 - 10:31

A thorough and freely available tool aims to help security professionals and executives anonymously tabulate the costs incurred by enterprises following all manner of cyber-incidents.

Called CyberTab, the tool was created by The Economist Intelligence Unit and sponsored by the consulting firm Booz Allen Hamilton. While the tool is free, it gives users the choice of opting in to allowing their reports to be used as part of a study undertaken by The Economist.

Based on input estimates of incident response and business expenses, as well as those of lost sales and customers, CyberTab calculates the cost of a specific cyber attack and estimates the return on investment for preventative measures.

It has two modes, a planning mode, which estimates the cost of a potential attack to help organizations better understand the risks they face and their security investment choices, and a reporting mode, which examines and reports the cost of a specific attack that has already occurred based on a long list of factors.

Each tool will ask users to identify the type of attack deployed against them. The options include denial of service attacks, malware infections, misuse of systems by employees or partners, intrusions with no data theft, intrusions with personal data theft, and intrusions with intellectual property data theft.

The tools also inquire – again anonymously – about the size of the affected enterprise, the industry and region in which it operates, the duration and time frame of a specific attack, when and by whom the attack was discovered, who carried out the attack, and what sorts of tactics and technologies were deployed by the attackers.

Beyond that, the tools take into account the types of systems and the number of servers and endpoints affected by the incident. In the case of DDoS attacks, the tools ask about the peak bandwidth in gigabits per second. They further take into account the company data and types of accounts implicated in the attack, as well as the impact on intellectual property and the number of parties affected: employees, consumer and business customers, and partners.

Outside the details of the attack, the tools also seek out specific cost details. How many incident response workers does the company employ? Which and how many technology measures has the business invested in? Did the organization seek outside help following the incident? Were there legal or customer service and support costs incurred in the incident?

It offers a straightforward user interface and allows users to stop and save their progress at any time.

In the end, the CyberTab tool takes all of these factors and more into account and estimates the total potential cost – in ranges – paid by an affected organization and the amount of money it could save – for each dollar spent – by deploying preventative measures.
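The Economist has not published CyberTab’s underlying model, but the general shape of such a calculation is straightforward: sum the direct response and business costs, add an estimate of lost sales, and report a range to reflect uncertainty. The sketch below only illustrates that idea; every figure and weight is an assumption, not anything taken from the tool itself.

    # Toy incident-cost estimate in the spirit of the tool described above.
    # This is NOT CyberTab's model; every input and weight is an assumption.
    def estimate_incident_cost(response_hours, hourly_rate, outside_help,
                               legal_costs, lost_customers, revenue_per_customer):
        """Return a (low, high) estimated cost range in dollars."""
        internal_response = response_hours * hourly_rate
        direct_costs = internal_response + outside_help + legal_costs
        lost_sales = lost_customers * revenue_per_customer
        total = direct_costs + lost_sales
        return total * 0.75, total * 1.25   # +/-25% band for estimation uncertainty

    low, high = estimate_incident_cost(
        response_hours=400, hourly_rate=90, outside_help=50_000,
        legal_costs=30_000, lost_customers=500, revenue_per_customer=120)
    print(f"Estimated incident cost: ${low:,.0f} to ${high:,.0f}")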

 

Yahoo Encrypts Data Center Links, Boosts Other Services

Thu, 04/03/2014 - 10:26

Yahoo certainly has taken its share of knocks during the past nine months of surveillance revelations and Snowden leaks for its encryption shortcomings. But the bruises are healing and the company is slowly working its way back into good graces.

After months of being an encryption laggard, Yahoo gained ground with a number of enhancements announced last night by new chief information security officer Alex Stamos.

Chief among the improvements: as of Monday, traffic moving between Yahoo data centers is encrypted. This, along with a lack of email encryption, was an area where critics were especially harsh on Yahoo after top secret documents revealed the National Security Agency was able to sniff communications between Yahoo and Google data centers. The Washington Post reported at the time that a combined initiative between the NSA and Britain’s GCHQ called MUSCULAR allowed the intelligence agencies to copy data from the companies’ fiber-optic cables outside the U.S. Google, meanwhile, announced in November it had turned encryption on between its data centers.

“In light of reports that governments have directly tapped Internet backbones to obtain secret access to millions of people’s private communications, it’s become clear that routine use of encryption is an important basic measure for privacy and security online,” said Seth Schoen, senior staff technologist at the Electronic Frontier Foundation. “Without it, any network operator (from the smallest Wi-Fi node to the largest Internet backbone companies), or anyone who can coerce or infiltrate one, can easily see the intimate details of what people are saying online.”

As for email, Yahoo was one of the last major web-based email providers to turn on SSL by default, doing so in January after an initial foray in November when users were given the option to turn it on manually. Stamos said yesterday that in the last month, Yahoo turned on encryption of its email service between Yahoo’s servers and other email providers who support the SMTPTLS standard.

Yahoo has also turned on HTTPS encryption on its home page, search queries that run on the home page and most of its properties. Yahoo supports TLS 1.2, Perfect Forward Secrecy and 2048-bit RSA encryption for its home page, mail and digital magazines, Stamos said. He added that users can initiate encrypted sessions for Yahoo News, Sports, Finance and Good Morning America on Yahoo by typing HTTPS in the URL. He also promised an encrypted version of Yahoo Messenger in the coming months.

“Our goal is to encrypt our entire platform for all users at all times, by default,” Stamos said.

Also on the road map, Stamos said, Yahoo plans to implement HSTS, Perfect Forward Secrecy and Certificate Transparency in the near future.

“One of our biggest areas of focus in the coming months is to work with and encourage thousands of our partners across all of Yahoo’s hundreds of global properties to make sure that any data that is running on our network is secure,” Stamos said. “Our broader mission is to not only make Yahoo secure, but improve the security of the overall web ecosystem.”

Forward secrecy has long been advocated by security and privacy experts as an important failsafe to secure data and communications. The technology keeps the content of old encrypted connections private even if the encryption key is lost or stolen in the future.
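Whether a given connection benefits from forward secrecy comes down to the key exchange the server negotiates: ephemeral Diffie-Hellman (DHE or ECDHE) suites provide it, static RSA key exchange does not. The snippet below is a minimal check using Python’s standard ssl module; the host name is just an example, and the prefix test is a heuristic (TLS 1.3 suites, which always use ephemeral key exchange, are named differently).

    # Minimal check of the cipher suite a server negotiates and whether the
    # key exchange (ECDHE/DHE) provides forward secrecy. Host is an example.
    import socket
    import ssl

    def check_forward_secrecy(host, port=443):
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                name, version, bits = tls.cipher()
                # Heuristic: ephemeral (EC)DHE suites start with ECDHE or DHE.
                fs = name.startswith(("ECDHE", "DHE"))
                print(f"{host}: {name} ({version}, {bits}-bit), "
                      f"forward secrecy: {'yes' if fs else 'no'}")

    check_forward_secrecy("www.yahoo.com")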

Yahoo was criticized heavily for its lack of encryption on its services, which experts said facilitated the NSA’s ability to snoop on traffic, and harmed users’ ability to keep their identities and personal information secure from criminals operating on the web. While it doesn’t stop the government or law enforcement from obtaining user data via court orders or warrants, it does hamper their efforts to hack into servers and communication lines.

Meanwhile, the EFF’s Encrypt the Web report, which it continues to update, demonstrated Yahoo’s glaring encryption weaknesses in the wake of the initial Snowden leaks. Since then, most of the technology companies surveyed have tightened up their encryption practices, leaving only carriers such as Verizon, Comcast and AT&T lagging behind.

“We commend Yahoo for taking these steps, and hope today’s announcements will continue to foster a recognition that encryption is an industry standard,” the EFF’s Schoen said.

DNS-Based Amplification Attacks Key on Home Routers

Wed, 04/02/2014 - 15:51

DNS provider Nominum has published new data on DNS-based DDoS amplification attacks that are using home and small office routers as a jumping-off point.

The provider said that in February alone, more than five million home routers were used to generate attack traffic; that number represents more than one-fifth of the 24 million routers online that have open DNS proxies.

The impact hits Internet service providers (ISPs) especially hard because amplification attacks not only consume bandwidth, but also drive up support costs and impact customer confidence in their ISP, Nominum said.

“Existing in-place DDoS defenses do not work against today’s amplification attacks, which can be launched by any criminal who wants to achieve maximum damage with minimum effort,” said Sanjay Kapoor, CMO and SVP of Strategy, Nominum. “Even if ISPs employ best practices to protect their networks, they can still become victims, thanks to the inherent vulnerability in open DNS proxies.”

Craig Young, senior security researcher with Tripwire, said the problem can largely be traced to weak default configurations on the home and SOHO routers.

“They shouldn’t have open DNS resolvers on the Net,” Young said. “Routers are designed so that someone inside the network can send a DNS request to the router, which passes that on to the ISP, which sends the request back to you inside the network. That’s fine and proper. What’s not fine is when someone else can send a message to an external interface and have the router send that to the ISP.”

Outsiders can take advantage of these open resolvers, spoof traffic and amplify the size of the request coming back. With a botnet, for example, this can quickly escalate and cause a denial-of-service condition against large organizations that criminals can find particularly effective in extortion schemes or hacktivism.
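An administrator or ISP can check whether a given address behaves as an open resolver by sending it a recursive query from outside the network and seeing whether it answers. The sketch below builds a bare-bones DNS query with only the standard library; the target address is a documentation placeholder, and such probes should only be pointed at devices you are authorized to test.

    # Minimal probe for an open DNS resolver: send a recursion-desired query
    # for an arbitrary name and see whether the device answers it.
    import socket
    import struct

    def is_open_resolver(ip, qname="example.com", timeout=3):
        # 12-byte DNS header: ID, flags (RD set), QDCOUNT=1, no other records.
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        question = b"".join(
            bytes([len(label)]) + label.encode() for label in qname.split(".")
        ) + b"\x00" + struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(header + question, (ip, 53))
            data, _ = sock.recvfrom(512)
        except OSError:
            return False       # no answer from outside: not an open proxy
        finally:
            sock.close()
        answer_count = struct.unpack(">H", data[6:8])[0]
        return answer_count > 0    # answered a recursive query from a stranger

    print(is_open_resolver("203.0.113.1"))   # placeholder address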

“DDoS has always relied on address spoofing so anything can be targeted and traffic cannot be traced to its origin; but as with any exploit, attackers continuously refine their tactics,” Nominum said in its report. “A new and dangerous DNS DDoS innovation has emerged, where attackers exploit a backdoor into provider networks: tens of millions of open DNS proxies scattered across the Internet. A few thousand can create Gigabits of unwanted traffic.”

In the past 18 months, the volume of bad traffic used in DDoS attacks has skyrocketed to unprecedented levels. A year ago, DDoS attacks launched against Spamhaus reached 300 Gbps, causing the blacklist service to drop offline for periods of time. Earlier this year, that threshold was surpassed when traffic optimization firm CloudFlare reported it had fought back a 400 Gbps DDoS attack for one of its European customers. The attackers took advantage of a weakness in the Network Time Protocol (NTP) to amplify the volume of that attack, while in the Spamhaus attack, the attackers took advantage of open DNS resolvers.
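The arithmetic behind amplification is simple: if a small spoofed query elicits a much larger answer, the attacker’s available bandwidth is multiplied by that ratio. The numbers below are illustrative assumptions, not measurements from the Spamhaus or CloudFlare incidents.

    # Illustrative amplification arithmetic; query and response sizes are assumptions.
    query_bytes = 64                # assumed size of a spoofed DNS query
    response_bytes = 3_000          # assumed size of a large (e.g., ANY/DNSSEC) response
    attacker_bandwidth_gbps = 10    # assumed bandwidth under the attacker's control

    amplification_factor = response_bytes / query_bytes
    victim_traffic_gbps = attacker_bandwidth_gbps * amplification_factor
    print(f"Amplification factor: ~{amplification_factor:.0f}x")
    print(f"Traffic arriving at the victim: ~{victim_traffic_gbps:.0f} Gbps")

Under those assumptions, roughly 10 Gbps of spoofed queries would be enough to direct several hundred gigabits per second at a victim.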

Nominum said ISPs can resolve the spoofing issue, in particular with regard to home routers.

“Solving the open resolver problem is straightforward: configure production resolvers properly (restrict access to IP ranges controlled by the server operator) and seek out long forgotten and malicious servers and shut them down,” Nominum said. “This is not to suggest it’s a trivial undertaking, this advice has been around a long time and the problem persists.”

Tripwire’s Young said ISPs could also filter against reputation lists which share attack information among providers to recognize DNS requests for domains that are part of an attack. Those packets could then be dropped.

“It’s not hard to have a DDoS-specific system and recognize abnormal patterns, apply rate-limiting, and drop traffic,” Young said.

Amazon Web Services Combing Third Parties for Exposed Credentials

Wed, 04/02/2014 - 15:01

Amazon Web Services is actively searching a number of sources, including code repositories and application stores, looking for exposed credentials that could put users’ accounts and services at risk.

A week ago, a security consultant in Australia said that as many as 10,000 secret Amazon Web Services keys could be found on Github through a simple search. And yesterday, a software developer reported receiving a notice from Amazon that his credentials were discovered on Google Play in an Android application he had built.

Raj Bala posted a copy of the notice he received from Amazon pointing out that the app was not built in line with Amazon’s recommended best practices because he had embedded his AWS Key ID (AKID) and AWS Secret Key in the app.

“This exposure of your AWS credentials within a publicly available Android application could lead to unauthorized use of AWS services, associated excessive charges for your AWS account, and potentially unauthorized access to your data or the data of your application’s users,” Amazon told Bala.

Amazon advises users who have inadvertently exposed their credentials to invalidate them and never distribute long-term AWS keys with an app. Instead, Amazon recommends requesting temporary security credentials.
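A sketch of that recommendation using the boto3 SDK follows; the one-hour duration is arbitrary, and in practice a distributed app would fetch such short-lived credentials from a server-side token service rather than ever seeing a long-term key.

    # Request short-lived credentials from AWS STS instead of embedding a
    # long-term access key and secret in a distributed application.
    # Assumes boto3 is installed and long-term credentials exist only on the
    # server side (never inside the shipped app).
    import boto3

    sts = boto3.client("sts")
    response = sts.get_session_token(DurationSeconds=3600)   # one-hour credentials
    creds = response["Credentials"]

    temporary_session = boto3.session.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # Only the temporary values are handed to the client; they expire on their own.
    print("Temporary credentials expire at:", creds["Expiration"])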

Rich Mogull, founder of consultancy Securosis, said this is a big deal.

“Amazon is being proactive and scanning common sources of account credentials, and then notifying customers,” Mogull said. “They don’t have to do this, especially since it potentially reduces their income.”

Mogull knows of what he speaks. Not long ago, he received a similar notice from Amazon regarding his AWS account, only his warning was a bit more dire—his credentials had been exposed on GitHub and someone had fired up unauthorized EC2 instances in his account.

Mogull wrote an extensive description of the incident on the Securosis blog explaining how he was building a proof-of-concept for a conference presentation, storing it on Github, and was done in because a test file he was using against blocks of code contained his Access Key and Secret Key in a comment line.

It turns out someone was using 10 unauthorized EC2 instances to do some Bitcoin mining, and the incident cost Mogull $500 in accumulated charges.
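Developers can catch this class of mistake before code is pushed by scanning their own repositories for strings shaped like AWS credentials. The patterns below are commonly cited heuristics (an access key ID beginning with “AKIA” followed by 16 characters, and a 40-character secret near an “aws” label), not an official AWS specification; a minimal sketch:

    # Scan local files for strings that look like AWS access keys before
    # publishing code. The regexes are common heuristics, not an AWS spec.
    import re
    import sys
    from pathlib import Path

    ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
    SECRET_KEY_RE = re.compile(r"(?i)aws.{0,20}['\"][0-9a-zA-Z/+]{40}['\"]")

    def scan(root="."):
        for path in Path(root).rglob("*"):
            if not path.is_file():
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for regex, label in ((ACCESS_KEY_RE, "access key ID"),
                                 (SECRET_KEY_RE, "possible secret key")):
                for match in regex.finditer(text):
                    print(f"{path}: {label}: {match.group(0)[:12]}...")

    if __name__ == "__main__":
        scan(sys.argv[1] if len(sys.argv) > 1 else ".")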

Amazon told an Australian publication that it will continue its efforts to seek out these exposed credentials on third-party sites such as Google Play and Github.

“To help protect our customers, we operate continuous fraud monitoring processes and alert customers if we find unusual activity,” iTnews quoted Amazon as saying.

Said Mogull: “It isn’t often we see a service provider protecting their customers from error by extending security beyond the provider’s service itself. Very cool.”

Researchers Divulge 30 Oracle Java Cloud Service Bugs

Wed, 04/02/2014 - 13:26

Upset with the vulnerability handling process at Oracle, researchers yesterday disclosed more than two dozen outstanding issues with the company’s Java Cloud Service platform.

Researchers at Security Explorations published two reports, complete with proof-of-concept code, explaining 30 different vulnerabilities in the platform, including implementation and configuration weaknesses, problems that could let users access other users’ applications, and an issue that could leave the service open to a remote code execution attack.

The Polish firm released the information after Oracle apparently failed to produce a monthly status report, a document that usually surfaces around the 24th of each month, for the reported vulnerabilities in March.

Adam Gowdiak, the company’s founder and CEO, believes that Oracle is on the fence regarding its cloud vulnerability handling policies.

“The company openly admits it cannot promise whether it will be communicating resolution of security vulnerabilities affecting their cloud data centers in the future,” Gowdiak said in an open letter posted to Security Explorations’ site on Tuesday.

Researchers dug up the following bugs in both the US1 and EMEA1 Oracle Java Cloud data centers:

  • The first block of issues, 1-16, stems from an insecure implementation of the perpetually fickle Java Reflection API in the service’s chief server, WebLogic. If exploited, the vulnerabilities could lead to a full compromise of the Java security sandbox.
  • The second batch of vulnerabilities, issues 17-20, ties into a problem with the platform’s whitelisting functionality, which can also be bypassed thanks to the Java Reflection API.
  • Issue 21 revolves around shared WebLogic administrator credentials. Usernames and passwords, which are usually encrypted, can be decrypted with a “standard API,” and are also present across the platform.
  • Issue 22 pertains to the insecurity of the platform’s Policy Store. Sensitive usernames and passwords – often times those belonging to users with admin privileges – are exposed in plaintext form.
  • Issue 23 exposes several WebLogic applications to the public internet. These internal applications are usually only accessible by authenticated Oracle Access Managers (OAM), but a problem with the platform could put them at risk.
  • Issue 24 is a directory traversal vulnerability that could let anyone on the public internet access files that wouldn’t otherwise be deployed on WebLogic.
  • Issue 25 stems from the platform’s use of a year-old version of Java SE, a problem that opens the platform up to even more vulnerabilities, since none of the fixes from the tail end of 2012 and 2013 have been applied yet.
  • The 26th issue also involves an authentication bypass, this time via the T3 protocol. While it sounds a little more complicated to exploit, Security Explorations researchers discovered it’s possible to send a “specially crafted object instance to a remote server identified by a given object identifier (OID) value and successfully impersonate the WebLogic kernelIdentity.”
  • Issue 27 makes it possible to tunnel T3 protocol requests through Oracle’s HTTP Server (OHS) to mimic HTTPS requests.
  • Issue 28 also deals with T3 protocol messages, as they relate to an out of bounds vulnerability with chunk data.

Researchers argue a remote code execution attack would be quite easy to pull off if an attacker combined several of the aforementioned vulnerabilities.

“As a result of the combination of the implementation and configuration flaws outlined… arbitrary code execution access could be gained on a WebLogic server instance hosting Java Cloud services of other users from the same regional data center,” the report, which gets much more in depth regarding attack vectors, reads.

Essentially the attack would involve having a custom .JSP (JavaServer Page) file uploaded to a target WebLogic server, which could later be called upon to trigger the execution of Java code embedded in it.

Security Explorations initially got in touch with Oracle about the preceding vulnerabilities (.PDF) in late January, but while it waited on Oracle’s response, it managed to find two additional issues.

Those bugs, 29 and 30 (.PDF), like several of the other 28, involve the service’s whitelisting implementation and can ultimately lead to its API being bypassed.

Oracle’s next batch of updates is set to be bundled together in its quarterly Critical Patch Update on April 15, although it’s unclear whether the vulnerabilities in Java Cloud Service, a service the company introduced in 2012 to help businesses manage data and build database applications in the cloud, will be addressed.

Matthew Green on the NSA and Crypto Backdoors

Wed, 04/02/2014 - 11:38

Dennis Fisher talks with Matthew Green of Johns Hopkins University about the paper he co-authored on the Extended Random extension for Dual EC DRBG and whether it could be considered a backdoor.

http://threatpost.com/files/2014/04/digital_underground_149.mp3

Download: digital_underground_149.mp3

Apple Fixes More Than 25 Flaws in Safari

Wed, 04/02/2014 - 07:20

Apple has updated its Safari browser, dropping a pile of security fixes that patch more than 25 vulnerabilities in the WebKit framework.

Many of the vulnerabilities Apple repaired in Safari can lead to remote code execution, depending upon the attack vector. There are a number of use-after-free vulnerabilities fixed in WebKit, along with some buffer overflows and other memory corruption issues. One of the vulnerabilities, CVE-2014-1289, for example, allows remote code execution.

“WebKit, as used in Apple iOS before 7.1 and Apple TV before 6.1, allows remote attackers to execute arbitrary code or cause a denial of service (memory corruption and application crash) via a crafted web site,” the vulnerability description says.

That flaw was fixed in iOS and other products earlier this year but Apple just released the fix for Safari on Monday. Along with the 25 memory corruption vulnerabilities the company fixed, it also pushed out a patch for a separate issue in Safari that could enable an attacker to read arbitrary files on a user’s machine.

“An attacker running arbitrary code in the WebProcess may be able to read arbitrary files despite sandbox restrictions. A logic issue existed in the handling of IPC messages from the WebProcess. This issue was addressed through additional validation of IPC messages,” the Apple advisory says.

More than half of the WebKit flaws fixed in Safari 6.1.3 and 7.0.3 were discovered by the Google security team, which isn’t unusual. Google Chrome uses the WebKit framework, too, and the company’s security team is constantly looking for new vulnerabilities in it.

LinkedIn Goes After Email-Scraping Browser Plug-In

Tue, 04/01/2014 - 14:54

UPDATE: The makers of the controversial Sell Hack browser plug-in responded this afternoon to a cease-and-desist letter from LinkedIn, confirming their extension no longer works on LinkedIn pages and that all of the publicly visible data it had processed from LinkedIn profiles has been deleted.

LinkedIn sent a cease-and-desist letter Monday night to Sell Hack, the maker of a JavaScript-based browser plug-in that scrapes email addresses associated with social media profiles from the web. The company markets that data to sales and marketing professionals.

“We’ve been described as sneaky, nefarious, no good, not ‘legitimate’ amongst other references by some,” the Sell Hack team said. “We’re not. We’re dads from the Midwest who like to build web and mobile products that people use.”

LinkedIn said none of its member data was put at risk by the two-month-old Sell Hack’s plug-in.

According to the Sell Hack website, once the browser extension is installed and a user browses to a social media profile page, a “Hack In” button is visible that will search the web for email addresses that could be associated with a particular profile.

According to another post on the Sell Hack blog: “The magic happens when you click the ‘Hack In’ button. You’ll notice the page slides down and our system starts checking publicly available data sources to return a confirmation of the person’s email address, or our best guesses.”

LinkedIn’s legal team reached out to Sell Hack with its cease-and-desist last night.

“We are doing everything we can to shut Sell Hack down,” said a LinkedIn spokesperson. “Yesterday LinkedIn’s legal team delivered Sell Hack a cease and desist letter as a result of several violations. LinkedIn members who downloaded Sell Hack should uninstall it immediately and contact Sell Hack requesting that their data be deleted.”

While the issue may not be a security vulnerability, technology providers have been ultra-sensitive since the Snowden leaks began about maintaining the privacy of their users’ data, which in this case is being collected and sold without consent.

“We advise LinkedIn members to protect themselves and to use caution before downloading any third-party extension or app,” LinkedIn said. “Often times, as with the Sell Hack case, extensions can upload your private LinkedIn information without your explicit consent.”

LinkedIn is one of a handful of major technology providers that lobbied the government hard for additional transparency in reporting government requests for user data. Many of those same companies were initially accused of providing the government direct access to their servers in order to obtain user data.

Unlike other providers such as Google or Facebook, LinkedIn does not offer Web-based email or storage. Instead, its appeal to the intelligence community was its mapping of connections between its hundreds of millions of members.

LinkedIn called the transparency ban unconstitutional in September; the technology companies eventually won out in January when the Justice Department agreed to ease a gag order that prevented the companies from reporting on national-security-related data requests.

This article was updated on April 1 with additional comments from LinkedIn and the Sell Hack team.

Clapper: NSA Has Searched Databases for Information on U.S. Persons

Tue, 04/01/2014 - 14:18

UPDATE–The NSA searches the data it collects incidentally on Americans, including phone calls and emails, during the course of terrorism investigations. James Clapper, the director of national intelligence, confirmed the searches in a letter to Sen. Ron Wyden, the first time that such actions have been confirmed publicly by U.S. intelligence officials.

Clapper, the head of all U.S. intelligence agencies, said in the letter that the NSA, which is tasked with collecting intelligence on foreign nationals, has searched the data that it has collected on Americans as part of its collection of foreign intelligence. The agency collects some Americans’ data, such as phone calls and emails, in the course of collecting the communications of foreign targets. But it has been unclear until now whether the NSA in fact searches those databases specifically for information on U.S. citizens.


Clapper made it clear in his letter that it does.

“As reflected in the August 2013 Semiannual Assessment of Compliance with Procedures and Guidelines Issued Pursuant to Section 702, which we declassified and released on August 21, 2013, there have been queries, using U.S. person identifiers, of communications lawfully acquired to obtain foreign intelligence by targeting non-U.S. persons reasonably believed to be located outside the U.S. pursuant to Section 702 of FISA,” Clapper said in a letter sent March 28 to Wyden (D-Ore.).

Wyden, a member of the Senate Intelligence Committee, has been a frequent critic of the NSA and its collection methods in recent years. During a hearing in January, Wyden asked whether the NSA ever had performed queries against its databases looking for information on U.S. citizens. Clapper’s letter was meant as an answer to the question. He did not say in the letter how many such searches the NSA had performed.

Responding to Clapper’s letter, Wyden and Sen. Mark Udall (D-Colo.) issued a statement saying that the DNI’s revelations show that the NSA has been taking advantage of a loophole in the existing law.

“It is now clear to the public that the list of ongoing intrusive surveillance practices by the NSA includes not only bulk collection of Americans’ phone records, but also warrantless searches of the content of Americans’ personal communications,” Wyden and Udall said. “This is unacceptable. It raises serious constitutional questions, and poses a real threat to the privacy rights of law-abiding Americans. If a government agency thinks that a particular American is engaged in terrorism or espionage, the Fourth Amendment requires that the government secure a warrant or emergency authorization before monitoring his or her communications. This fact should be beyond dispute.

“Senior officials have sometimes suggested that government agencies do not deliberately read Americans’ emails, monitor their online activity or listen to their phone calls without a warrant. However, the facts show that those suggestions were misleading, and that intelligence agencies have indeed conducted warrantless searches for Americans’ communications using the ‘back-door search’ loophole in section 702 of the Foreign Intelligence Surveillance Act.”

Section 702 of the Foreign Intelligence Surveillance Act is the measure that governs the way that the NSA can target foreigners for intelligence collection and spells out the methods it must use to ensure that data on Americans or other so-called “U.S. persons” are not collected. The NSA also must take pains to minimize the amount of information it gathers that isn’t relevant to a foreigner who is being targeted.

Clapper said in his letter that the NSA has followed the minimization procedures when it does query its databases on information related to U.S. persons. He also said that Congress had the chance to do away with the agency’s ability to run such queries, and didn’t.

“As you know, when Congress reauthorized Section 702, the proposal to restrict such queries was specifically raised and ultimately not adopted,” the letter says.

This story was updated on April 2 to include the statement from Wyden and Udall.

DVR Infected with Bitcoin Mining Malware

Tue, 04/01/2014 - 13:57

Johannes Ullrich of the SANS Institute claims to have found malware infecting digital video recorders (DVRs) predominantly used to record footage captured by surveillance camera systems.

Oddly enough, Ullrich claims that one of the two malware binaries implicated in this attack scheme appears to be a Bitcoin miner. The other, he says, looks like an HTTP agent that likely makes it easier to download further tools or malware. However, at the present time, the malware seems to only be scanning for other vulnerable devices.

“D72BNr, the bitcoin miner (according to the usage info based on strings) and mzkk8g, which looks like a simplar(sp.) http agent, maybe to download additional tools easily (similar to curl/wget which isn’t installed on this DVR by default),” Ullrich wrote in the SANS diary.

The researcher first became aware of the malware last week after he observed a Hikvision DVR (again, commonly used to record video surveillance footage) scanning for port 5000. Yesterday, Ullrich was able to recover the malware samples referenced above. A link to the samples is included in the SANS diary posting.

Ullrich noted that sample analysis is ongoing with the malware, but that it appears to be an ARM binary, which is an indication that the malware is targeting devices rather than your typical x86 Linux server. Beyond that, the malware is also scanning for Synology (network attached storage) devices exposed on port 5000.

“Using our DShield sensors, we initially found a spike in scans for port 5000 a while ago,” Ullrich told Threatpost via email. “We associated this with a vulnerability in Synology DiskStation devices which became public around the same time. To further investigate this, we set up some honeypots that simulated Synology’s web admin interface, which listens on port 5000.”
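The honeypot approach Ullrich describes can be approximated in a few lines: listen on port 5000 and record where each connection comes from and what it sends. The sketch below is a minimal stand-in using only the standard library, not the actual SANS setup, and it only logs traffic; it does not emulate the Synology interface.

    # Minimal TCP listener on port 5000 that logs connecting addresses and the
    # first request line they send, as a rough stand-in for the honeypot above.
    import datetime
    import socket

    def run_honeypot(port=5000):
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", port))
        server.listen(5)
        print(f"Listening on port {port}...")
        while True:
            conn, (addr, src_port) = server.accept()
            conn.settimeout(5)
            try:
                data = conn.recv(1024)
            except OSError:
                data = b""
            finally:
                conn.close()
            first_line = data.split(b"\r\n", 1)[0].decode("latin-1", "replace")
            stamp = datetime.datetime.utcnow().isoformat()
            print(f"{stamp} {addr}:{src_port} {first_line!r}")

    if __name__ == "__main__":
        run_honeypot()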

Upon analyzing the results from the honeypot, Ullrich says he found a number of scans: some originating from Shodan but many others still originating from these DVRs.

“At first, we were not sure if that was the actual device scanning,” Ullrich admitted. “In NAT (network address translation) scenarios, it is possible that the DVR is visible from the outside, while a different device behind the same IP address originated the scans.”

Further examination revealed that the DVRs in question were indeed originating the scans.

These particular DVRs, Ullrich noted, are used in conjunction with security cameras, and so they’re often exposed to the internet to give employees the ability to monitor the security cameras remotely. Unlike normal “TiVo” style DVRs, these run on a stripped down version of Linux. In this case, the malware was specifically compiled to run in this environment and would not run on a normal Intel based Linux machine, he explained.

This is the malware sample’s HTTP request:

[Image: DVR malware HTTP request]

The malware is also extracting the firmware version details of the devices it is scanning for. Those requests look like this:

[Image: firmware version scan request]

While Ullrich notes that the malware is merely scanning now, he believes that future exploits are likely.

 

With Extended Random, Cracking Dual EC in BSAFE ‘Trivial’

Tue, 04/01/2014 - 12:56

UPDATE: Known theoretical attacks against TLS using the troubled Dual EC random number generator (something an intelligence agency might try its hand at) are in reality a bit more challenging than we’ve been led to believe.

The addition of the Extended Random extension to RSA Security’s BSAFE cryptographic libraries, for example, where Dual EC is the default random number generator, makes those challenges a moot point for the National Security Agency.

“By adding the extension, cracking Dual EC is trivial for TLS,” said Matt Fredrikson, one of the researchers who yesterday published a paper called “On the Practical Exploitability of Dual EC in TLS Implementations,” which explained the results of a study determining the costs of exploiting the Dual EC RNG where TLS is deployed.

The presence of Extended Random in BSAFE means the incursion into RSA Security by the NSA went beyond the inclusion of a subverted NIST-approved technology and an alleged $10 million payout by the government, as described in the documents leaked by Edward Snowden. Its presence underscores that the NSA will leave no stone unturned to ensure its surveillance efforts are successful.

BSAFE was a prime target since it was used by developers not only in commercial and FIPS-approved software, but also in a number of open source packages. An attacker with a presence on the wire, say at an ISP or a key switching point on the Internet, could just passively sit and watch client or server handshake messages and be able to decrypt traffic at a relatively low cost.

Ironically, Extended Random is not turned on by default in BSAFE, and RSA says it is present only in the Java versions of BSAFE. Fredrikson confirmed the researchers did not see support for the extension compiled into the C/C++ version they studied, even though the BSAFE documentation says it is supported.

“We say as much in the paper: ‘The BSAFE-C library documentation indicates that both watermarking and extended random are supported in some versions of the library; however, the version we have appears to have been compiled without this support,’” he said. “We only had the documentation and compiled libraries to work from–not the source code. If the documentation was mistaken, we would have no clear way of knowing.”

By attacking Dual EC without Extended Random, the researchers were able to crack the C/C++ version of BSAFE in seconds, whereas Microsoft Windows SChannel and OpenSSL took anywhere from 90 minutes to three hours to crack. In SChannel, for example, less of Dual EC’s output is sent, making the attack more difficult.

“Dual EC, as NIST printed it, allows for additional entropy to be mixed into the computation,” Fredrikson said. “OpenSSL utilizes that alternative, where BSAFE did not. That’s significant because the attacker would have to guess what randomness is given by OpenSSL.”

Dual EC, written by the NSA, was a questionable choice from the start for inclusion in such an important encryption tool as BSAFE. Experts such as Bruce Schneier said it was slower than available alternatives and contained a bias that led many, Schneier included, to believe it was a backdoor.

Extended Random, meanwhile, was an IETF draft proposed by the Department of Defense for acceptance as a standard. Written by Eric Rescorla, an expert involved in the design of HTTPS and currently with Mozilla, Extended Random was never approved as an IETF standard and its window as a draft for consideration has long expired.

Yet, it found its way into BSAFE. In a Reuters article yesterday that broke the story, RSA Security CTO Sam Curry declined to say whether RSA was paid by the NSA to include the extension in BSAFE; he added that it had been removed from BSAFE within the last six months. In September, NIST and RSA recommended that developers move away from using Dual EC in products because it was no longer trustworthy.

The researchers tested Dual EC in BSAFE C, BSAFE Java, Microsoft Windows SChannel I and II, and OpenSSL. BSAFE C fell in fewer than four seconds, while BSAFE Java took close to 64 minutes. Although Extended Random was not enabled for their experiments, it was simple to extrapolate its impact, the researchers said; they concluded the extension makes Dual EC much less expensive to exploit in BSAFE Java, for example, by a factor of more than 65,000.
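
A quick back-of-the-envelope reading of those figures (an illustration only, not numbers taken from the paper): a 65,000-fold speedup would shrink the reported 64-minute BSAFE Java attack to a few hundredths of a second, and a factor of that size is roughly what shaving 16 bits off a brute-force search buys, since 2^16 = 65,536.

    # Illustrative arithmetic only; the baseline and speedup are the figures quoted above.
    java_attack_seconds = 64 * 60          # reported BSAFE Java attack time, ~64 minutes
    speedup = 65_000                       # factor the researchers extrapolated for Extended Random
    print(java_attack_seconds / speedup)   # ~0.06 seconds
    print(2 ** 16)                         # 65,536 -- roughly 16 fewer bits to brute force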

The DOD’s rationale for Extended Random was a claim that the nonces used should be twice as long as the security level, e.g., 256-bit nonces for 128-bit security, the researchers said in the study. In practice, the extension does nothing to enhance the randomness of the numbers Dual EC generates; it simply exposes more of the generator’s output, exacerbating the bias that already makes that output easier for an attacker to predict.

“When transmitting more randomness, that translates to faster attacks on session keys,” Fredrikson said. “That’s pretty bad. I haven’t seen anything quite like this.”

This article was updated on April 2 with clarifications throughout.

Why Full Disclosure Still Matters

Tue, 04/01/2014 - 10:58

When the venerable Full Disclosure security mailing list shut down abruptly last month, many in the security community were surprised. But a lot of people, even those who had been members of the list for a long time, greeted the news with a shrug. Twitter, blogs and other outlets had obviated the need for mailing lists, they said. But Fyodor, the man who wrote Nmap, figured there was still a need for a public list where people could share their thoughts openly, so he decided to restart Full Disclosure, and he believes the security community will be better for it.

Mailing lists such as Full Disclosure, Bugtraq and many others once were a key platform for communication and the dissemination of new research and vulnerability information in the security community. Many important discoveries first saw the light of day on these lists and they served as forums for debates over vulnerability disclosure, vendor responses, releasing exploit code and any number of other topics.

But the lists also could be full of flame wars, name-calling and all kinds of other useless chaff. Still, Fyodor, whose real name is Gordon Lyon, said he sees real value in the mailing list model, especially in today’s environment where critical comments or information that a vendor might deem unfavorable can be erased from a social network in a second, never to be seen again.

“Lately web-based forums and social networks have gained in popularity, and they can offer fancy layout and great features such as community rating of posts. But mailing lists still have them beat in decentralization and resiliency to censorship. A mail sent to the new full disclosure list is immediately remailed to more than 7,000 members who then all have their own copy which can’t be quietly retracted or edited,” Fyodor said via email. “And even when John shut down the old list, the messages (more than 91,000) stayed in our inboxes and on numerous web archives such as SecLists.org. With centralized web systems, the admins can be forced to take down or edit posts, or can lose interest (or suffer a technical failure) and shut down the site, taking down all the old messages with it.”

John Cartwright, one of the creators of Full Disclosure, shut down the list in March after 12 years of operation, saying he had tired of dealing with one list member’s repeated requests to remove messages from the list’s archives. Legal threats from vendors and others were not uncommon on Full Disclosure, and Fyodor, who maintains one of the many Full Disclosure mirrors and archives online, said he had received his share of those threats as well. Asked whether he expected the legal threats to continue, he said he did, but that it wouldn’t matter.

“Yes, but we have already been dealing with them as we were already the most popular web archive for the old Full Disclosure list.  Also, this isn’t an ‘everything goes’ forum where people can post blatantly illegal content.  If folks start posting pirated software or other people’s credit card and social security numbers, we’ll take those down from the archive or not let them through in the first place.  But the point of this list is for network security information, and we will stand up against vendors who use legal threats and intimidation to try and hide the evidence of their shoddy and insecure products,” he said.

Since Fyodor rebooted the list last week, it has revived quickly, with researchers returning to posting their advisories and vendors notifying users about new patches. Fyodor said he’s hopeful that the list will continue to have an important place in the community for years to come.

“I think it is important for the community to have a vendor-neutral outlet like this for detailed discussion of security vulnerabilities and exploitation techniques,” he said.

Image from Flickr photos of Thanh Kim

Second NSA Crypto Tool Found in RSA BSafe

Mon, 03/31/2014 - 15:59

A team of academics has released a study of the maligned Dual EC DRBG algorithm used in RSA Security’s BSAFE and other cryptographic libraries; it includes new evidence that the National Security Agency used a second cryptographic tool alongside Dual EC DRBG in BSAFE to facilitate spying.

Allegations in top-secret documents leaked by Edward Snowden say the NSA subverted the NIST standards process years ago in order to contribute weaknesses to the Dual EC DRBG algorithm. Reuters then reported in December that RSA Security was paid $10 million to make it the default random number generator in BSAFE. Those libraries are found not only in RSA products, but in a good number of commercial and open source software packages.

The paper, “On the Practical Exploitability of Dual EC in TLS Implementations,” concludes that Dual EC can be cracked in short order because of the inherent predictability of the numbers it generates. The cost varies widely by implementation, from roughly three hours against Microsoft Windows SChannel down to about four seconds against BSAFE for C, and the researchers found that the Extended Random extension supported in BSAFE would reduce the cost of the attack dramatically further. They also tested OpenSSL’s implementation of Dual EC and found it the most difficult to crack.

A report this morning by Reuters revealed the presence of Extended Random in BSAFE; despite its name, the extension does nothing to enhance the randomness of the numbers Dual EC generates and instead makes the generator’s output easier to attack.

Reuters said today that, while use of Extended Random isn’t pervasive, RSA built support for it into BSAFE for Java in 2009. The paper explains that the researchers used $40,000 worth of servers in their experiments and that the attacks against BSAFE for C and BSAFE for Java were the most straightforward.

“The BSAFE implementations of TLS make the Dual EC back door particularly easy to exploit in two ways,” the researchers wrote. “The Java version of BSAFE includes fingerprints in connections, making them easy to identify. The C version of BSAFE allows a drastic speedup in the attack by broadcasting longer strings of random bits than one would at first imagine to be possible given the TLS standards.”

Stephen Checkoway, an assistant research professor at Johns Hopkins, told Reuters the attack would have been 65,000 times faster with Extended Random enabled.

RSA Security said it had removed Extended Random within the last six months, but CTO Sam Curry would not comment on whether the government had paid RSA to include the extension in BSAFE as well.

RSA advised developers in September to move off Dual EC DRBG, one week after NIST made a similar recommendation. But experts were skeptical about the algorithm long before Edward Snowden and surveillance were part of the day-to-day lexicon. In 2007, cryptography experts Dan Shumow and Niels Ferguson gave a landmark presentation on weaknesses in the algorithm, and Bruce Schneier wrote a seminal essay in which he said the weaknesses in Dual EC DRBG “can only be described as a backdoor.”

Schneier wrote that the algorithm was slow and had a bias, meaning that the random numbers it generates aren’t so random. According to the new paper, an attacker who generated the constants in Dual EC, as the NSA would have if it inserted a backdoor into the RNG, would be able to predict the generator’s future output.

“What Shumow and Ferguson showed is that these numbers have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random-number generator after collecting just 32 bytes of its output,” Schneier wrote in the essay. “To put that in real terms, you only need to monitor one TLS Internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.

“The researchers don’t know what the secret numbers are,” Schneier said. “But because of the way the algorithm works, the person who produced the constants might know; he had the mathematical opportunity to produce the constants and the secret numbers in tandem.”
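
To make the structure Schneier describes concrete, here is a toy sketch (an illustration under simplifying assumptions, not the researchers’ code): it uses a tiny textbook curve instead of NIST P-256, skips the truncation of the output, and shows that whoever knows the secret multiplier d relating the two published constants (P = d*Q) can turn one observed output point into the generator’s next internal state, and therefore into every future output.

    # Toy illustration of the Dual EC DRBG back door (simplified: tiny curve,
    # no output truncation). Curve: y^2 = x^3 + 2x + 2 over F_17, a textbook example.
    p, a, b = 17, 2, 2

    def inv(n):
        return pow(n, p - 2, p)

    def add(P1, P2):
        # Point addition; None is the point at infinity.
        if P1 is None: return P2
        if P2 is None: return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P1 == P2:
            lam = (3 * x1 * x1 + a) * inv(2 * y1) % p
        else:
            lam = (y2 - y1) * inv((x2 - x1) % p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def mul(k, P1):
        # Double-and-add scalar multiplication.
        R = None
        while k:
            if k & 1:
                R = add(R, P1)
            P1 = add(P1, P1)
            k >>= 1
        return R

    Q = (5, 1)        # public constant (a generator of the toy curve's group)
    d = 4             # the "secret numbers": only whoever made the constants knows d
    P = mul(d, Q)     # second public constant, secretly related to Q by P = d*Q

    def dualec_step(state):
        # One simplified round: new state derived from P, output point derived from Q.
        new_state = mul(state, P)[0]
        return new_state, mul(new_state, Q)   # (internal state, point behind the output)

    state, R = dualec_step(2)                 # generator runs once; attacker observes R

    # Attacker: d*R = d*(state*Q) = state*(d*Q) = state*P, whose x-coordinate is
    # exactly the state the generator will use next -- so all future output is known.
    predicted_next_state = mul(d, R)[0]
    real_next_state, _ = dualec_step(state)
    assert predicted_next_state == real_next_state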

Over the weekend, Steve Marquess, founding partner at the OpenSSL Software Foundation, slammed FIPS 140-2 validation testing in a post on his personal website, speculating that the weaknesses in Dual EC DRBG were carefully planned and executed and likening them to an advanced persistent threat. FIPS 140-2 is the government standard against which cryptographic modules are certified.

Marquess said FIPS 140-2 validation prohibits changes to validated modules, calling the restriction “deplorable.”

“That, I think, perhaps even more than rigged standards like Dual EC DRBG, is the real impact of the cryptographic module validation program,” he wrote. “It severely inhibits the naturally occurring process of evolutionary improvement that would otherwise limit the utility of consciously exploited vulnerabilities.”

He offered up the OpenSSL FIPS module as an example where vulnerabilities live on, including Lucky 13 and CVE-2014-0076.

“That’s why I’ve long been on record as saying that ‘a validated module is necessarily less secure than its unvalidated equivalent’, e.g. the OpenSSL FIPS module versus stock OpenSSL,” he said.

Dual EC DRBG, however, is not enabled by default in the OpenSSL FIPS Object Module, but its presence gives an attacker who has already compromised a server by other means the chance to enable it silently.

“As an APT agent you already have access to many target systems via multiple means such as ‘QUANTUM INTERCEPT’ style remote compromises and access to products at multiple points in the supply chain. You don’t want to install ransomware or steal credit card numbers, you want unobtrusive and persistent visibility into all electronic communications,” Marquess wrote. “You want to leave as little trace of that as possible, and the latent Dual EC DRBG implementation in the OpenSSL FIPS module aids discrete compromise. By only overwriting a few words of object code you can silently enable use of Dual EC, whether FIPS mode is actually enabled or not. Do it in live memory and you have an essentially undetectable hack.”

Marquess said the best defense is not to have the code present at all and that the OSF is trying to have it removed from its FIPS Module.