Threatpost for B2B

The First Stop For Security News

LinkedIn Goes After Email-Scraping Browser Plug-In

Tue, 04/01/2014 - 14:54

UPDATE: The makers of the controversial Sell Hack browser plug-in responded this afternoon to a cease-and-desist order from LinkedIn and confirmed their extension no longer works on LinkedIn pages and that all of the publicly visible data it had processed from LinkedIn profiles has been deleted.

LinkedIn sent a cease-and-desist letter Monday night to Sell Hack, a JavaScript-based browser plug-in that scrapes email addresses associated with social media profiles from the web. The company markets that data to sales and marketing professionals.

“We’ve been described as sneaky, nefarious, no good, not ‘legitimate’ amongst other references by some,” the Sell Hack team said. “We’re not. We’re dads from the Midwest who like to build web and mobile products that people use.”

LinkedIn said none of its member data was put at risk by the two-month-old Sell Hack plug-in.

According to the Sell Hack website, once the browser extension is installed and a user browses to a social media profile page, a “Hack In” button is visible that will search the web for email addresses that could be associated with a particular profile.

According to another post on the Sell Hack blog: “The magic happens when you click the ‘Hack In’ button. You’ll notice the page slides down and our system starts checking publicly available data sources to return a confirmation of the person’s email address, or our best guesses.”

LinkedIn’s legal team reached out to Sell Hack with its cease-and-desist last night.

“We are doing everything we can to shut Sell Hack down,” said a LinkedIn spokesperson. “Yesterday LinkedIn’s legal team delivered Sell Hack a cease and desist letter as a result of several violations. LinkedIn members who downloaded Sell Hack should uninstall it immediately and contact Sell Hack requesting that their data be deleted.”

While the issue may not be a security vulnerability, technology providers have been ultra-sensitive about the privacy of their users’ data since the Snowden leaks began; in this case, that data is being collected and sold without consent.

“We advise LinkedIn members to protect themselves and to use caution before downloading any third-party extension or app,” LinkedIn said. “Often times, as with the Sell Hack case, extensions can upload your private LinkedIn information without your explicit consent.”

LinkedIn is one of a handful of major technology providers who lobbied hard against the government for additional transparency in reporting government requests for user data. Many of those same companies were initially accused of providing the government direct access to servers in order to obtain user data.

Unlike other providers such as Google or Facebook, LinkedIn does not offer Web-based email or storage. Instead, its appeal to the intelligence community was its mapping of connections between its hundreds of millions of members.

LinkedIn called the transparency ban unconstitutional in September; the technology companies eventually won out in January when the Justice Department agreed to ease a gag order that prevented the companies from reporting on national-security-related data requests.

This article was updated on April 1 with additional comments from LinkedIn and the Sell Hack team.

Clapper: NSA Has Searched Databases for Information on U.S. Persons

Tue, 04/01/2014 - 14:18

UPDATE–The NSA searches the data it collects incidentally on Americans, including phone calls and emails, during the course of terrorism investigations. James Clapper, the director of national intelligence, confirmed the searches in a letter to Sen. Ron Wyden, the first time that such actions have been confirmed publicly by U.S. intelligence officials.

Clapper, the head of all U.S. intelligence agencies, said in the letter that the NSA, which is tasked with collecting intelligence on foreign nationals, has searched the data it has collected on Americans as part of its foreign intelligence collection. The agency collects some Americans’ data, such as phone calls and emails, in the course of collecting the communications of foreign targets. But it had been unclear until now whether the NSA in fact searches those databases specifically for information on U.S. citizens.


Clapper made it clear in his letter that it does.

“As reflected in the August 2013 Semiannual Assessment of Compliance with Procedures and Guidelines Issued Pursuant to Section 702, which we declassified and released on August 21, 2013, there have been queries, using U.S. person identifiers, of communications lawfully acquired to obtain foreign intelligence by targeting non-U.S. persons reasonably believed to be located outside the U.S. pursuant to Section 702 of FISA,” Clapper said in a letter sent March 28 to Wyden (D-Ore.).

Wyden, a member of the Senate Intelligence Committee, has been a frequent critic of the NSA and its collection methods in recent years. During a hearing in January, Wyden asked whether the NSA ever had performed queries against its databases looking for information on U.S. citizens. Clapper’s letter was meant as an answer to the question. He did not say in the letter how many such searches the NSA had performed.

Responding to Clapper’s letter, Wyden and Sen. Mark Udall (D-Colo.) issued a statement, saying that the DNI’s revelations show that the NSA has been taking advantage of a loophole in the existing law.

“It is now clear to the public that the list of ongoing intrusive surveillance practices by the NSA includes not only bulk collection of Americans’ phone records, but also warrantless searches of the content of Americans’ personal communications,” Wyden and Udall said. “This is unacceptable. It raises serious constitutional questions, and poses a real threat to the privacy rights of law-abiding Americans. If a government agency thinks that a particular American is engaged in terrorism or espionage, the Fourth Amendment requires that the government secure a warrant or emergency authorization before monitoring his or her communications. This fact should be beyond dispute.

“Senior officials have sometimes suggested that government agencies do not deliberately read Americans’ emails, monitor their online activity or listen to their phone calls without a warrant. However, the facts show that those suggestions were misleading, and that intelligence agencies have indeed conducted warrantless searches for Americans’ communications using the ‘back-door search’ loophole in section 702 of the Foreign Intelligence Surveillance Act.”

Section 702 of the Foreign Intelligence Surveillance Act is the measure that governs the way that the NSA can target foreigners for intelligence collection and spells out the methods it must use to ensure that data on Americans or other so-called “U.S. persons” are not collected. The NSA also must take pains to minimize the amount of information it gathers that isn’t relevant to a foreigner who is being targeted.

Clapper said in his letter that the NSA has followed the minimization procedures when it does query its databases on information related to U.S. persons. He also said that Congress had the chance to do away with the agency’s ability to run such queries, and didn’t.

“As you know, when Congress reauthorized Section 702, the proposal to restrict such queries was specifically raised and ultimately not adopted,” the letter says.

This story was updated on April 2 to include the statement from Wyden and Udall.

DVR Infected with Bitcoin Mining Malware

Tue, 04/01/2014 - 13:57

Johannes Ullrich of the SANS Institute claims to have found malware infecting digital video recorders (DVRs) predominately used to record footage captured by surveillance camera systems.

Oddly enough, Ullrich claims that one of the two malware binaries implicated in this attack appears to be a Bitcoin miner. The other, he says, looks like an HTTP agent that likely makes it easier to download further tools or malware. At present, however, the malware seems only to be scanning for other vulnerable devices.

“D72BNr, the bitcoin miner (according to the usage info based on strings) and mzkk8g, which looks like a simplar [sic] http agent, maybe to download additional tools easily (similar to curl/wget which isn’t installed on this DVR by default),” Ullrich wrote in a SANS diary post.

The researcher first became aware of the malware last week after he observed a Hikvision DVR (again, a model commonly used to record video surveillance footage) scanning for port 5000. Yesterday, Ullrich was able to recover the malware samples referenced above; a link to the samples is included in the SANS diary posting.

Ullrich noted that analysis of the malware is ongoing, but that it appears to be an ARM binary, an indication that it is targeting embedded devices rather than typical x86 Linux servers. Beyond that, the malware is also scanning for Synology (network-attached storage) devices exposed on port 5000.
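Determining whether a recovered sample is an ARM binary, as described above, comes down to reading the `e_machine` field of the ELF header. A minimal sketch (the architecture table is abbreviated; the path passed in would be the recovered sample):

```python
import struct

# Abbreviated map of ELF e_machine values to architecture names.
EM_NAMES = {3: "x86", 40: "ARM", 62: "x86-64", 183: "AArch64"}

def elf_machine(path):
    """Return the target architecture of an ELF binary, or None if not ELF."""
    with open(path, "rb") as f:
        ident = f.read(16)                       # e_ident: magic, class, endianness
        if ident[:4] != b"\x7fELF":
            return None
        endian = "<" if ident[5] == 1 else ">"   # EI_DATA: 1 = little-endian
        f.read(2)                                # skip e_type
        (machine,) = struct.unpack(endian + "H", f.read(2))
    return EM_NAMES.get(machine, "unknown (%d)" % machine)
```

A result of "ARM" on a binary that refuses to run on an x86 workstation is exactly the behavior Ullrich describes.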

“Using our DShield Sensors, we initially found a spike in scans for port 5000 a while ago,” Ullrich told Threatpost via email. “We associated this with a vulnerability in Synology Diskstation devices which became public around the same time. To further investigate this, we set up some honeypots that simulated Synology’s web admin interface which listens on port 5000.”
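A low-interaction honeypot of the sort Ullrich describes can be little more than a TCP listener on port 5000 that logs each connecting IP and answers with a plausible banner. A minimal sketch (the banner is purely illustrative, not Synology’s actual admin-interface response):

```python
import socket
from datetime import datetime, timezone

# Illustrative response; a real honeypot would mimic the DiskStation web UI.
BANNER = (b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n"
          b"<html><head><title>Synology DiskStation</title></head></html>")

def run_honeypot(host="0.0.0.0", port=5000, max_conns=None):
    """Log every TCP connection to the given port and reply with a fake banner."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    seen = 0
    while max_conns is None or seen < max_conns:
        conn, (ip, _) = srv.accept()
        print(f"{datetime.now(timezone.utc).isoformat()} probe from {ip}")
        try:
            conn.recv(4096)       # read (and discard) whatever the scanner sends
            conn.sendall(BANNER)  # pretend to be the DiskStation admin interface
        finally:
            conn.close()
        seen += 1
    srv.close()
```

The logged source addresses are what let SANS distinguish Shodan's scanners from the infected DVRs.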

Upon analyzing the results from the honeypot, Ullrich says he found a number of scans: some originating from Shodan, but many others originating from these DVRs.

“At first, we were not sure if that was the actual device scanning,” Ullrich admitted. “In NAT (network address translation) scenarios, it is possible that the DVR is visible from the outside, while a different device behind the same IP address originated the scans.”

Further examination revealed that the DVRs in question were indeed originating the scans.

These particular DVRs, Ullrich noted, are used in conjunction with security cameras, and so they’re often exposed to the internet to give employees the ability to monitor the security cameras remotely. Unlike normal “TiVo” style DVRs, these run on a stripped down version of Linux. In this case, the malware was specifically compiled to run in this environment and would not run on a normal Intel based Linux machine, he explained.

This is the malware sample’s HTTP request:

[Image: DVR malware HTTP request]

The malware is also extracting the firmware version details of the devices it is scanning for. Those requests look like this:

[Image: firmware scan request]

While Ullrich notes that the malware is merely scanning now, he believes that future exploits are likely.


With Extended Random, Cracking Dual EC in BSAFE ‘Trivial’

Tue, 04/01/2014 - 12:56

UPDATE: Known theoretical attacks against TLS using the troubled Dual EC random number generator (something an intelligence agency might try its hand at) are in reality a bit more challenging than we’ve been led to believe.

The addition of the Extended Random extension to RSA Security’s BSAFE cryptographic libraries, for example, where Dual EC is the default random number generator, makes those challenges a moot point for the National Security Agency.

“By adding the extension, cracking Dual EC is trivial for TLS,” said Matt Fredrikson, one of the researchers who yesterday published a paper called “On the Practical Exploitability of Dual EC in TLS Implementations,” which explained the results of a study determining the costs of exploiting the Dual EC RNG where TLS is deployed.

The presence of Extended Random in BSAFE suggests the NSA’s incursion into RSA Security went beyond the inclusion of a subverted NIST-approved technology and an alleged $10 million payout by the government, as described in the documents leaked by Edward Snowden. It is further evidence that the NSA will leave no stone unturned to ensure its surveillance efforts are successful.

BSAFE was a prime target since it was used by developers not only in commercial and FIPS-approved software, but also in a number of open source packages. An attacker with a presence on the wire, say at an ISP or a key switching point on the Internet, could just passively sit and watch client or server handshake messages and be able to decrypt traffic at a relatively low cost.

Ironically, Extended Random is not turned on by default in BSAFE, and RSA says it is present only in BSAFE Java versions. Fredrikson confirmed the researchers did not see support for the extension compiled into the C/C++ version they studied despite the fact that the BSAFE documentation says it is supported.

“We say as much in the paper: ‘The BSAFE-C library documentation indicates that both watermarking and extended random are supported in some versions of the library; however, the version we have appears to have been compiled without this support,’” he said. “We only had the documentation and compiled libraries to work from–not the source code. If the documentation was mistaken, we would have no clear way of knowing.”

By attacking Dual EC minus Extended Random, the researchers were able to crack the C/C++ version of BSAFE in seconds, whereas Microsoft Windows SChannel and OpenSSL took anywhere from 90 minutes to three hours to crack. In SChannel, for example, less of Dual EC’s output is sent, making it more difficult to crack.

“Dual EC, as NIST printed it, allows for additional entropy to be mixed into the computation,” Fredrikson said. “OpenSSL utilizes that alternative, where BSAFE did not. That’s significant because the attacker would have to guess what randomness is given by OpenSSL.”

Dual EC, written by the NSA, was a questionable choice from the start for inclusion in such an important encryption tool as BSAFE. Experts such as Bruce Schneier said it was slower than available alternatives and contained a bias that led many, Schneier included, to believe it was a backdoor.

Extended Random, meanwhile, was an IETF draft proposed by the Department of Defense for acceptance as a standard. Written by Eric Rescorla, an expert involved in the design of HTTPS and currently with Mozilla, Extended Random was never approved as an IETF standard and its window as a draft for consideration has long expired.

Yet, it found its way into BSAFE. In a Reuters article yesterday that broke the story, RSA Security CTO Sam Curry declined to say whether RSA was paid by the NSA to include the extension in BSAFE; he added that it has been removed from BSAFE within the last six months. In September, NIST and RSA recommended that developers move away from using Dual EC in products because it was no longer trustworthy.

The researchers tested Dual EC in BSAFE C, BSAFE Java, Microsoft Windows SChannel I and II, and OpenSSL. BSAFE C fell in less than four seconds, while BSAFE Java took close to 64 minutes; and while Extended Random was not enabled for their experiments, it was simple to extrapolate its impact, the researchers said. They concluded the extension makes Dual EC much less expensive to exploit in BSAFE Java, for example, by a factor of more than 65,000.

The DOD’s reasoning for Extended Random was a claim that the nonces used should be twice as long as the security level, e.g., 256-bit nonces for 128-bit security, the researchers said in the study. Instead, Dual EC’s bias, which already makes it easier for an attacker to guess the randomness of the numbers it generates, is exacerbated by the Extended Random extension, which, despite its name, does not enhance the randomness of numbers generated by Dual EC.

“When transmitting more randomness, that translates to faster attacks on session keys,” Fredrikson said. “That’s pretty bad. I haven’t seen anything quite like this.”

This article was updated on April 2 with clarifications throughout.

Why Full Disclosure Still Matters

Tue, 04/01/2014 - 10:58

When the venerable Full Disclosure security mailing list shut down abruptly last month, many in the security community were surprised. But a lot of people, even those who had been members of the list for a long time, greeted the news with a shrug. Twitter, blogs and other outlets had obviated the need for mailing lists, they said. But Fyodor, the man who wrote Nmap, figured there was still a need for a public list where people could share their thoughts openly, so he decided to restart Full Disclosure, and he believes the security community will be better for it.

Mailing lists such as Full Disclosure, Bugtraq and many others once were a key platform for communication and the dissemination of new research and vulnerability information in the security community. Many important discoveries first saw the light of day on these lists and they served as forums for debates over vulnerability disclosure, vendor responses, releasing exploit code and any number of other topics.

But the lists also could be full of flame wars, name-calling and all kinds of other useless chaff. Still, Fyodor, whose real name is Gordon Lyon, said he sees real value in the mailing list model, especially in today’s environment where critical comments or information that a vendor might deem unfavorable can be erased from a social network in a second, never to be seen again.

“Lately web-based forums and social networks have gained in popularity, and they can offer fancy layout and great features such as community rating of posts. But mailing lists still have them beat in decentralization and resiliency to censorship. A mail sent to the new full disclosure list is immediately remailed to more than 7,000 members who then all have their own copy which can’t be quietly retracted or edited,” Fyodor said via email. “And even when John shut down the old list, the messages (more than 91,000) stayed in our inboxes and on numerous web archives. With centralized web systems, the admins can be forced to take down or edit posts, or can lose interest (or suffer a technical failure) and shut down the site, taking down all the old messages with it.”

John Cartwright, one of the creators of Full Disclosure, shut down the list in March after 12 years of operation because, he said, he had tired of dealing with one list member’s repeated requests to remove messages from the list’s archives. Legal threats from vendors and others were not uncommon on Full Disclosure, and Fyodor, who maintains one of the many Full Disclosure mirrors and archives online, said he had received his share of those threats as well. Asked whether he expected the legal threats to continue, he said he did, but that it wouldn’t matter.


“Yes, but we have already been dealing with them as we were already the most popular web archive for the old Full Disclosure list.  Also, this isn’t an ‘everything goes’ forum where people can post blatantly illegal content.  If folks start posting pirated software or other people’s credit card and social security numbers, we’ll take those down from the archive or not let them through in the first place.  But the point of this list is for network security information, and we will stand up against vendors who use legal threats and intimidation to try and hide the evidence of their shoddy and insecure products,” he said.

Since Fyodor rebooted the list last week, it has revived quickly, with researchers returning to posting their advisories and vendors notifying users about new patches. Fyodor said he’s hopeful that the list will continue to have an important place in the community for years to come.

“I think it is important for the community to have a vendor-neutral outlet like this for detailed discussion of security vulnerabilities and exploitation techniques,” he said.

Image from Flickr photos of Thanh Kim

Second NSA Crypto Tool Found in RSA BSafe

Mon, 03/31/2014 - 15:59

A team of academics has released a study on the maligned Dual EC DRBG algorithm used in RSA Security’s BSafe and other cryptographic libraries; the study includes new evidence that the National Security Agency used a second cryptographic tool alongside Dual EC DRBG in BSafe to facilitate spying.

Allegations in top-secret documents leaked by Edward Snowden say the NSA subverted the NIST standards process years ago in order to contribute weaknesses to the Dual EC DRBG algorithm. Reuters then reported in December that RSA Security was paid $10 million to make it the default random number generator in BSafe. Those libraries are found not only in RSA products, but in a good number of commercial and open source software packages.

The paper, “On the Practical Exploitability of Dual EC in TLS Implementations,” concludes that Dual EC can be cracked in short order given the inherent predictability weaknesses in the random numbers it generates. Attack times varied widely by implementation, from three hours against Microsoft Windows SChannel II down to four seconds against BSafe for C, and the inclusion of the Extended Random extension in BSafe reduces the cost of the attack dramatically further. The researchers also tested OpenSSL’s implementation of Dual EC and found it the most difficult to crack.

A report this morning by Reuters outed the presence of Extended Random in BSafe alongside Dual EC DRBG; the extension works contrary to its stated mission of enhancing the randomness of numbers generated by the algorithm.

Reuters said today that, while use of Extended Random isn’t pervasive, RSA built support for it into BSafe for Java in 2009. The paper explains that the researchers used $40,000 worth of servers in their experiment and that the attacks on BSafe for C and BSafe for Java were the most straightforward.

“The BSAFE implementations of TLS make the Dual EC back door particularly easy to exploit in two ways,” the researchers wrote. “The Java version of BSAFE includes fingerprints in connections, making them easy to identify. The C version of BSAFE allows a drastic speedup in the attack by broadcasting longer strings of random bits than one would at first imagine to be possible given the TLS standards.”

Stephen Checkoway, an assistant research professor at Johns Hopkins, told Reuters the attack would have been 65,000 times faster with Extended Random.

RSA Security said it had removed Extended Random within the last six months, but its CTO Sam Curry would not comment on whether the government had paid RSA to include the protocol in BSafe as well.

RSA advised developers in September to move off Dual EC DRBG, one week after NIST made a similar recommendation. But experts were skeptical about the algorithm long before Edward Snowden and surveillance were part of the day-to-day lexicon. In 2007, cryptography experts Dan Shumow and Niels Ferguson gave a landmark presentation on weaknesses in the algorithm, and Bruce Schneier wrote a seminal essay in which he said the weaknesses in Dual EC DRBG “can only be described as a backdoor.”

Schneier wrote that the algorithm was slow and had a bias, meaning that the random numbers it generates aren’t so random. According to the new paper, an attacker who generated the constants in Dual EC (as the NSA would have if it inserted a backdoor into the RNG) would be able to predict future outputs.

“What Shumow and Ferguson showed is that these numbers have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random-number generator after collecting just 32 bytes of its output,” Schneier wrote in the essay. “To put that in real terms, you only need to monitor one TLS Internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.

“The researchers don’t know what the secret numbers are,” Schneier said. “But because of the way the algorithm works, the person who produced the constants might know; he had the mathematical opportunity to produce the constants and the secret numbers in tandem.”
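The “skeleton key” property Schneier describes can be illustrated with a deliberately broken toy generator (this is an analogy only, not Dual EC itself): if an observer can recover the generator’s internal state from a small amount of its output, every subsequent output is predictable.

```python
# Toy linear congruential generator whose output IS its internal state.
M = 2 ** 32
A, C = 1664525, 1013904223  # classic LCG constants (Numerical Recipes)

def step(state):
    return (A * state + C) % M

def toy_rng(seed, n):
    """Emit n output words; fatally, each word equals the full internal state."""
    out, s = [], seed
    for _ in range(n):
        s = step(s)
        out.append(s)
    return out

# An eavesdropper who captures just the first output word can replay the rest,
# much as knowledge of Dual EC's secret constants lets an attacker predict
# future outputs after observing a small amount of generator output.
observed = toy_rng(seed=0xDEADBEEF, n=5)
predicted = [observed[0]]
while len(predicted) < len(observed):
    predicted.append(step(predicted[-1]))
assert predicted == observed
```

Dual EC hides its state behind elliptic-curve math rather than exposing it directly, but the alleged secret constants play the role of the missing piece that makes state recovery feasible.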

Over the weekend, Steve Marquess, founding partner at the OpenSSL Software Foundation, slammed FIPS 140-2 validation testing and speculated that the weaknesses in Dual EC DRBG were carefully planned and executed, likening them to an advanced persistent threat in a post on his personal website. FIPS 140-2 is the government standard against which cryptographic modules are certified.

Marquess said FIPS 140-2 validation prohibits changes to validated modules, calling it “deplorable.”

“That, I think, perhaps even more than rigged standards like Dual EC DRBG, is the real impact of the cryptographic module validation program,” he wrote. “It severely inhibits the naturally occurring process of evolutionary improvement that would otherwise limit the utility of consciously exploited vulnerabilities.”

He offered up the OpenSSL FIPS module as an example where vulnerabilities live on, including Lucky 13 and CVE-2014-0076.

“That’s why I’ve long been on record as saying that ‘a validated module is necessarily less secure than its unvalidated equivalent’, e.g. the OpenSSL FIPS module versus stock OpenSSL,” he said.

Dual EC DRBG, however, is not enabled by default in the OpenSSL FIPS Object Module, but its presence offers an attacker who is on a server by another means the chance to enable it silently.

“As an APT agent you already have access to many target systems via multiple means such as ‘QUANTUM INTERCEPT’ style remote compromises and access to products at multiple points in the supply chain. You don’t want to install ransomware or steal credit card numbers, you want unobtrusive and persistent visibility into all electronic communications,” Marquess wrote. “You want to leave as little trace of that as possible, and the latent Dual EC DRBG implementation in the OpenSSL FIPS module aids discrete compromise. By only overwriting a few words of object code you can silently enable use of Dual EC, whether FIPS mode is actually enabled or not. Do it in live memory and you have an essentially undetectable hack.”

Marquess said the best defense is not to have the code present at all and that the OSF is trying to have it removed from its FIPS Module.

Researcher Identifies Potential Security Issues With Tesla S

Mon, 03/31/2014 - 14:41

The current move by auto makers to stuff their vehicles full of networked devices, Bluetooth radios and WiFi connectivity has not gone unnoticed by security researchers. Charlie Miller and Chris Valasek spent months taking apart–literally and figuratively–a Toyota Prius to see what vulnerabilities might lie inside; and they found plenty. Now, another researcher has identified a number of issues with the security of the Tesla S, including its dependence upon a weak one-factor authentication system linked to a mobile app that can unlock the car remotely.

The Tesla S is a high-end, all-electric vehicle that includes a number of interesting features, including a center console touchscreen that controls much of the car’s systems. There also is an iPhone app that allows users to control a number of the car’s functions, including the door locks, the suspension and braking system and sunroof. Nitesh Dhanjani found that when new owners sign up for an account on the Tesla site, they must create a six-character password. That password is then used to log in to the iPhone app.

Dhanjani discovered that the Tesla site doesn’t seem to have a function to limit the number of login attempts on a user account, so an attacker potentially could try to brute force a user’s password. An attacker also could phish a user to get her password and then, if he had access to the user’s iPhone, log in to the Tesla app and control the vehicle’s systems. The attacker also could use the Tesla API to check the location of the user’s vehicle, even without the iPhone app.
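To put the six-character minimum in perspective, the keyspace is small enough that an unthrottled brute-force attack is at least arithmetically plausible. A rough sketch (the guess rates are illustrative assumptions, not measurements of Tesla’s site):

```python
# Keyspace for a six-character password over letters and digits.
alphabet = 26 + 26 + 10           # a-z, A-Z, 0-9
keyspace = alphabet ** 6          # 56,800,235,584 possibilities

# Time to exhaust the keyspace at assumed, illustrative guess rates.
for guesses_per_sec in (10, 1_000, 100_000):
    days = keyspace / guesses_per_sec / 86_400
    print(f"{guesses_per_sec:>7} guesses/s -> {days:,.0f} days to exhaust")
```

In practice, as Dhanjani notes below, phishing or credential reuse is cheaper still, since it sidesteps guessing entirely.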

Dhanjani said that the attacks he’s most concerned about don’t involve brute-forcing, though. He’s more worried about attackers running a phishing campaign against Tesla owners.

“The point here (and subsequent attack vectors) is that Tesla needs to implement an authentication mechanism that is beyond 1-factor. Attackers shouldn’t be able to use traditional and well known attack vectors like phishing to remotely locate and unlock a 100k+ car built in 2014,” he said via email.

“In cases where the attacker is able to hack another website, he or she can use the usernames and credentials from the compromised accounts to attempt them on Tesla’s website and APIs given that users have the tendency to re-use passwords.”

Other possible attack vectors Dhanjani envisioned include an attacker installing malware on a target user’s machine to log his password for the Tesla site or using social-engineering attacks against Tesla employees to have them turn over passwords or remotely unlock a vehicle. The phishing and malware attack vectors are threats that any site that relies on a password faces. But they take on extra importance when the password is associated with something as valuable as a car.

“The Tesla Model S is a great car and a fantastic product of innovation. Owners of Tesla as well as other cars are increasingly relying on information security to protect the physical safety of their loved ones and their belongings. Given the serious nature of this topic, we know we can’t attempt to secure our vehicles the way we have attempted to secure our workstations at home in the past by relying on static passwords and trusted networks. The implications to physical security and privacy in this context have raised stakes to the next level,” Dhanjani said.

Along with the authentication issues, Dhanjani also found that by connecting a laptop to the vehicle through a port in the dashboard, he could identify three separate IP-enabled devices in the vehicle, potentially the dashboard screen, the center console and an unidentified third device. Both the console and the dashboard have a number of services exposed, including SSH and HTTP, and the third device has telnet exposed as well.
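The service inventory Dhanjani describes can be reproduced with a simple TCP connect check against the handful of ports in question. A minimal sketch (to be run only against devices you own):

```python
import socket

# Ports corresponding to the services reported on the in-car devices.
SERVICES = {22: "ssh", 23: "telnet", 80: "http"}

def open_services(host, ports=SERVICES, timeout=0.5):
    """Return {port: name} for each TCP port that accepts a connection."""
    found = {}
    for port, name in ports.items():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        if s.connect_ex((host, port)) == 0:   # 0 means the connect succeeded
            found[port] = name
        s.close()
    return found
```

A completed TCP handshake only shows a listener is present; banner-grabbing would be needed to confirm which software is actually answering.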

He said that he has sent the information he gathered to a Tesla employee through a friend and the company is aware of what he’s published, but he hasn’t heard an official response.

Image from Flickr photos of AutoMotoPortal.HR

Google DNS Intercepted in Turkey

Mon, 03/31/2014 - 09:23

Internet service providers in Turkey have been intercepting traffic to Google’s DNS servers and redirecting it, shutting off a workaround that Turkish users had employed to get to sites such as Twitter and YouTube after the government had blocked them.

Google software engineers said they had received credible reports over the weekend about the traffic interceptions, and they had looked into the situation and confirmed the problem themselves. Google operates free DNS servers to help accelerate Web traffic, and its engineers said that Turkish ISPs have set up their own servers that are imitating Google’s.

“Google operates DNS servers because we believe that you should be able to quickly and securely make your way to whatever host you’re looking for, be it YouTube, Twitter, or any other,” Steven Carstensen, software engineer at Google, wrote in a blog post.

“But imagine if someone had changed out your phone book with another one, which looks pretty much the same as before, except that the listings for a few people showed the wrong phone number. That’s essentially what’s happened: Turkish ISPs have set up servers that masquerade as Google’s DNS service.”

Turkish authorities last week began blocking sites such as YouTube and Twitter, saying that there was information on those sites that was harmful to the country’s national security. The Tor service also was blocked in Turkey at certain points last week. One method for blocking a site is for ISPs to redirect requests for those sites to other destinations. Users in Turkey noticed the problem and began working around it by using other DNS servers, including those run by Google.

If ISPs in Turkey are intercepting traffic to Google’s DNS servers, that means that ISPs or the government can redirect those users to whatever site they choose. DNS hijacks sometimes are used by attackers as part of larger attacks, and intercepting DNS traffic also has become a common tactic for governments in some countries that are trying to prevent citizens from reaching certain sites, often social media networks.
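One way users and researchers can spot this kind of interception is to compare the answers returned over an untrusted path against answers obtained over a trusted channel. A minimal sketch of that check, with hypothetical answer sets (the function and addresses below are illustrative, not part of Google's tooling):

```python
def possibly_intercepted(trusted_answers, observed_answers):
    """Crude interception heuristic: if the locally observed DNS answers
    share no addresses with answers obtained over a trusted channel, the
    query path may have been redirected."""
    return not (set(trusted_answers) & set(observed_answers))

# Overlapping answers suggest the resolver path is intact.
print(possibly_intercepted({"173.194.39.78"}, {"173.194.39.78"}))  # False

# Completely disjoint answers are a red flag worth investigating.
print(possibly_intercepted({"173.194.39.78"}, {"195.175.254.2"}))  # True
```

In practice the "trusted" answers would come from a channel the local ISP cannot tamper with, such as a VPN or a DNSSEC-validated lookup, and false positives are possible since large services legitimately return different addresses by region.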

Google officials did not say whether they have taken any steps to address the situation in Turkey.

Image from Flickr photos of Bob Mical

WiFi Bug Plagues Philips Internet-Enabled TVs

Fri, 03/28/2014 - 16:53

UPDATE — Some versions of Philips’ internet-enabled SmartTVs are vulnerable to cookie theft and a mélange of other tricks that abuse a lax WiFi setting.

The problem lies in Miracast, a WiFi feature that comes enabled by default, with a fixed password, no PIN, and no request of permission, according to researchers at the Malta-based firm ReVuln.

The vulnerability allows anyone within range of the device’s WiFi adapter to connect to the TV and access its many features. This includes being able to access potentially sensitive information within the TV’s system and configuration files, as well as any files on a USB stick connected to the TV. If the user browses the Internet on the same TV, an attacker might also be able to glean some of the cookies used to access certain websites.

The WiFi hole could also open the TV up to a whole mess of hijinks: An attacker could broadcast their own video, audio or images to the TV, and change the channel on a whim, with the viewer none the wiser.

A video posted by ReVuln’s Luigi Auriemma on Wednesday points out that the default settings are present in the TV’s most recent firmware update, QF2EU-, which allows anyone to connect to the device’s WiFi without authorization and without asking permission. The device’s hardcoded password is just ‘Miracast,’ and after users are connected they are not given the option to set a custom password.

In the proof-of-concept video, Auriemma goes on to steal files from a USB device plugged into the TV, along with Gmail cookie files stored in the web browser.

According to ReVuln the vulnerabilities exist in all 2013 models of SmartTV (6, 7, 8, 9xxx) that have the most recent firmware installed.

The WiFi Alliance, the consortium that oversees WiFi standards and certification, said later Friday that it was looking into the vulnerability and had been in touch with Philips regarding the security of Miracast.

“The recent report of a non-compliant passphrase implementation appears to be limited to a single vendor’s implementation,” a statement from the Alliance read Friday. “We enforce the requirements of our certification programs and have been in contact with the company in question to ensure that any device bearing the Miracast mark meets our requirements.”

The vulnerability is the latest in a line of “internet of things” weaknesses: software flaws that plague everyday items connected to the internet, such as vehicles, light bulbs and medical devices.

The researchers at ReVuln found a similar flaw in Samsung’s LED 3D TV last year, in which an attacker could exploit a vulnerability to retrieve personal information from the device, spy on users and root the TV remotely.

FTC Settles With Fandango, Credit Karma Over SSL Issues in Mobile Apps

Fri, 03/28/2014 - 14:30

The makers of two major mobile apps, Fandango and Credit Karma, have settled with the Federal Trade Commission after the commission charged that they deliberately misrepresented the security of their apps and failed to validate SSL certificates. The apps promised users that their data was being sent over secure SSL connections, but the apps had disabled the validation process.

The settlements with the FTC don’t include any monetary penalties, but both companies have been ordered to submit to independent security audits every other year for the next 20 years and to put together comprehensive security programs.

“Consumers are increasingly using mobile apps for sensitive transactions. Yet research suggests that many companies, like Fandango and Credit Karma, have failed to properly implement SSL encryption,” said FTC Chairwoman Edith Ramirez. “Our cases against Fandango and Credit Karma should remind app developers of the need to make data security central to how they design their apps.”

The FTC complaint against Fandango alleges that the Fandango Movies app on iOS, which enables users to buy movie tickets, included an assertion during checkout telling users that their sensitive information was being sent over a secure connection. However, the app didn’t validate those connections, so users’ financial information was exposed during transmission.

“Before March 2013, Fandango did not test the Fandango Movies application to ensure that the application was validating SSL certificates and securely transmitting consumers’ sensitive personal information. Although Fandango commissioned limited security audits of its applications starting in 2011, more than two years after the release of its iOS application, respondent limited the scope of these security audits to issues presented when the ‘code is decompiled or disassembled,’ i.e., threats arising only from attackers who had physical access to a device. As a result, these audits did not assess whether the iOS application’s transmission of information, including credit card information, was secure,” the FTC complaint says.

The FTC also said that Fandango didn’t have a good process for responding to vulnerability reports from security researchers, leading to the company missing an advisory from a researcher who had discovered the SSL vulnerability.

“In December 2012, a security researcher informed respondent through its Customer Service web form that its iOS application was vulnerable to man-in-the-middle attacks because it did not validate SSL certificates. Because the security researcher’s message included the term “password,” Fandango’s Customer Service system flagged the message as a password reset request and replied with an automated message providing the researcher with instructions on how to reset passwords. Fandango’s Customer Service system then marked the security researcher’s message as “resolved,” and did not escalate it for further review,” the complaint says.

The problems with the Credit Karma app were similar, as it did not validate SSL certificates during supposedly secure connection attempts. The FTC alleges in its complaint that the company failed to validate SSL certificates on both its iOS and Android apps.

“During the iOS application’s development, Credit Karma had authorized its service provider, the application development firm, to use code that disabled SSL certificate validation ‘in testing only,’ but failed to ensure this code’s removal from the production version of the application. As a result, the iOS application shipped to consumers with the SSL certificate validation vulnerability. Credit Karma could have identified and prevented this vulnerability by performing an adequate security review prior to the iOS application’s launch,” the complaint says.

“In February 2013, one month after addressing the vulnerability in its iOS application, Credit Karma launched the Android version of its application, again without first performing an adequate security review or at least testing the application for previously identified vulnerabilities. As a result, like the iOS application before it, the Android application failed to validate SSL certificates, overriding the defaults provided by the Android APIs.”
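The “disabled in testing only” pattern the complaints describe is easy to illustrate in any language. A minimal Python sketch using the standard `ssl` module (the `TESTING` flag is a hypothetical stand-in for the build configuration at issue):

```python
import ssl

# Secure default: Python's ssl module validates both the certificate
# chain and the hostname.
secure_ctx = ssl.create_default_context()
assert secure_ctx.verify_mode == ssl.CERT_REQUIRED
assert secure_ctx.check_hostname is True

# The vulnerable pattern: validation switched off "for testing" and
# accidentally left enabled in the shipped build, so a man-in-the-middle
# can present any certificate unchallenged.
TESTING = True  # hypothetical flag that should never reach production
insecure_ctx = ssl.create_default_context()
if TESTING:
    insecure_ctx.check_hostname = False   # must be disabled first
    insecure_ctx.verify_mode = ssl.CERT_NONE
```

The fix is the one the FTC implies: strip the bypass from release builds, and add a release-time check asserting that production contexts still require certificate validation.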

The FTC’s complaint against Credit Karma also alleges that the app was storing users’ authentication tokens and passcodes in the clear on users’ devices.

Image from Flickr photos of Erik Drost

Cisco Patches Denial-of-Service Vulnerabilities in IOS

Fri, 03/28/2014 - 12:38

Cisco this week patched a handful of denial-of-service vulnerabilities in its IOS software. The security updates are part of a biannual release from Cisco; the next one is due in September.

Five of the six patches handle denial-of-service vulnerabilities in its flagship IOS used in most of its routers and network switches. The sixth patch also repairs a DoS bug, but in its Cisco 7600 Series Route Switch Processor 720 with 10 Gb Ethernet uplinks.

Successful exploits of these bugs could not only crash the networking gear, but also force reboots, Cisco said.

Perhaps the most severe vulnerabilities addressed by Cisco are in IOS’ implementation of network address translation (NAT). The update patched two vulnerabilities that an attacker could use to remotely crash networking gear running IOS. Cisco said the vulnerability is in the Application Layer Gateway module in IOS.

“The vulnerability is due to the way certain malformed DNS packets are processed on an affected device when those packets undergo Network Address Translation (NAT). An attacker could exploit this vulnerability by sending malformed DNS packets to be processed and translated by an affected device,” Cisco said in its advisory. “An exploit could allow the attacker to cause a reload of the affected device that would lead to a DoS condition.”

The second NAT vulnerability is in the TCP Input module that could allow a remote attacker to cause a memory leak or reboot of the flawed device.

“The vulnerability is due to the way certain sequences of TCP packets are processed on an affected device when those packets undergo Network Address Translation (NAT). An attacker could exploit this vulnerability by sending a specific sequence of TCP packets to be processed by an affected device,” Cisco said. “An exploit could allow the attacker to cause a memory leak or reload of the affected device that would lead to a DoS condition.”

Cisco also patched a DoS bug in the IOS SSL VPN subsystem, which mishandles certain HTTP requests. An attacker can send the VPN malicious requests that consume memory, eventually crashing it.

“A three-way TCP handshake must be completed for each malicious connection to an affected device; however, authentication is not required,” Cisco said. “The default TCP port number for SSLVPN is 443.”

Cisco also updated the IPv6 protocol stack in IOS and IOS XE to address a vulnerability that could lead to memory consumption. An attacker would need to send a malformed IPv6 request to exploit the bug.

“The vulnerability is due to incorrect processing of crafted IPv6 packets. An attacker could exploit this vulnerability by sending specially crafted IPv6 packets to the affected device,” Cisco said. “An exploit could allow the attacker to trigger I/O memory depletion, causing device instability and could cause a device to reload.”

IOS and IOS XE were also vulnerable to an exploit of a DoS bug in their Internet Key Exchange version 2 module. An IOS device improperly processes malformed IKEv2 packets, enabling an attacker to exploit the bug by sending malformed packets to the device causing it to crash.

The final IOS vulnerability was found in the operating system’s Session Initiation Protocol implementation. A remote attacker could cause IOS to reboot by sending a malicious SIP message if the device is configured to process SIP messages.

“The vulnerability is due to incorrect processing of specific SIP messages. An attacker could exploit this vulnerability by sending specific SIP messages, which may be considered well-formed or crafted to the SIP gateway,” Cisco said. “An exploit could allow the attacker to trigger a device reload.”

Finally, the patch for the Cisco 7600 Series processor vulnerability addresses a security issue with the Kailash field-programmable gate array (FPGA) versions prior to 2.6, Cisco said.

“An attacker could exploit this vulnerability by sending crafted IP packets to or through the affected device,” Cisco said. “An exploit could allow the attacker to cause the route processor to no longer forward traffic or reboot.”

Image courtesy Lee LeBlanc

Apple Phishing Scam Steals Credentials, Credit Cards

Fri, 03/28/2014 - 12:27

A new email phishing scam is making use of a realistic-looking Apple login page in order to pilfer Apple ID usernames and passwords before moving on to steal user credit card information.

According to SANS Internet Storm Center forums member Craig Cox, this phishing scam is particularly sophisticated because of its use of JavaScript code that purports to validate whether Apple IDs entered into the malicious field are legitimate. In other words, if a user falls for the trick but enters an incorrect Apple ID, the site will come back and ask that the user “Double check that [he or she] typed a valid Apple ID.”

Apple ID phishing scam

The malicious domain that the attackers are using here is appleidconfirm[dot]net.

It’s not clear whether the attackers have found a way to distinguish legitimate Apple ID email addresses from non-existent ones. However, once the victim has entered credentials the page accepts as valid, that person is redirected to another part of the malicious domain (ending in /?2).
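It is worth noting that a page can produce such “double check” prompts with nothing more than a client-side format check. The sketch below is a purely hypothetical illustration of that possibility, not an analysis of the scam’s actual JavaScript:

```python
import re

# A simple shape check can reject malformed addresses without knowing
# whether an Apple ID actually exists -- one plausible, hypothetical
# explanation for the page's apparent "validation."
EMAIL_SHAPE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_like_email(candidate: str) -> bool:
    return bool(EMAIL_SHAPE.match(candidate))

print(looks_like_email("user@example.com"))  # True
print(looks_like_email("not-an-email"))      # False -> "Double check..."
```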

On this second page, users are presented with a convincing replica of the actual Apple website. The page requests various pieces of personal information, such as full names, dates of birth, billing addresses, and phone numbers. When and if a victim enters that information and clicks the “verify” button, a window then pops up asking for the user’s payment card information.

Apple ID phishing scam

If the victim decides to enter payment information, he or she will be redirected to the actual Apple website.

According to a technical analysis posted in the ISC Diary write-up, the site responsible for all this tomfoolery was registered just three days ago. The attackers were able to mimic Apple’s interface so accurately because they didn’t copy its HTML or CSS, but rather overlaid their website with screenshots. Because of this method, the scam becomes a bit of a dead giveaway when and if a user attempts to follow any links on the masquerading site. Another dead giveaway, the post notes, is the lack of HTTPS, which Apple would deploy if it were asking users to provide sensitive information.

A similar scam emerged last week when attackers compromised a server belonging to EA Games and modified it to look like an Apple log-in page, which they then used in a phishing attack designed to steal Apple ID credentials.

U.S. Government Seeks Laxer Hacking Rules for Law Enforcement

Fri, 03/28/2014 - 11:06

The federal government is looking for a way to relax the laws to make it simpler for law enforcement agents to target and compromise the computers of suspects involved in criminal cases. The Department of Justice has forwarded a request to the body that considers such changes, asking that judges in one district be allowed to issue warrants for remote access operations in that district–or any other.

The change, first reported by the Wall Street Journal, would be a major one, allowing investigators to obtain warrants from a given judge to conduct remote access attacks against suspects’ machines in any other district in the United States. The government’s request also seeks the ability to obtain one warrant that would apply to several computers, as in a large-scale botnet investigation.

“The Department of Justice recommends an amendment to Rule 41 of the Federal Rules of Criminal Procedure to update the provisions relating to the territorial limits for searches of electronic storage media. The amendment would establish a court-supervised framework through which law enforcement can successfully investigate and prosecute sophisticated Internet crimes, by authorizing a court in a district where activities related to a crime have occurred to issue a warrant – to be executed via remote access – for electronic storage media and electronically stored information located within or outside that district,” Mythili Raman, acting assistant attorney general, wrote in a letter supporting the change.

“The proposed amendment would better enable law enforcement to investigate and prosecute botnets and crimes involving Internet anonymizing technologies, both which pose substantial threats to members of the public.”

In a document that lays out the government’s reasoning for the request, which will be considered in two weeks, the government gives a couple of examples of the types of investigations that could benefit from this change. One of the examples is a warrant request in an investigation into a child pornography ring that was hosting a site as a Tor hidden service.

“The second example is based on a warrant used in an investigation of a child pornography website operating as a ‘hidden service’ on the Tor network. Tor masks its users’ actual IP addresses by routing their communications through a distributed network of relay computers run by volunteers around the world. In this case, law enforcement knew the physical location of the server used to host the hidden service. However, without use of a NIT, investigators could not identify the administrators or users of the hidden service. This warrant would authorize the collection of IP addresses, MAC addresses, and other similar information from users and administrators of the website,” Jonathan J. Wroblewski, director of Justice’s Office of Policy and Legislation, wrote in a letter to the chair of the subcommittee considering the rule change.

The letter also includes a sample affidavit in support of a warrant request that describes a “network investigative technique”–the government’s euphemism for hacking–that closely resembles a watering hole attack.

“I make this affidavit in support of an application under Rule 41 of the Federal Rules of Criminal Procedure for a warrant to use a network investigative technique (“NIT”) on computers that access Website A, identified by Tor URL example.onion (collectively, TARGET COMPUTERS), as further described in this affidavit and its attachments, in order to search the TARGET COMPUTERS for the information described in Attachment B,” the sample affidavit says.

The proposed change will be considered by the U.S. Judicial Conference April 7-8.

Critical Vulnerabilities Patched in Schneider Electric Serial Modbus Driver

Fri, 03/28/2014 - 10:34

Schneider Electric, a leading provider of industrial control systems, recently patched a remotely exploitable vulnerability in a driver found in 11 of its products.

The Industrial Control Systems Computer Emergency Response Team (ICS-CERT) released an advisory yesterday alerting users to the availability of a patch and warning of the consequences associated with the stack-based buffer overflow vulnerability found in Schneider’s Serial Modbus Driver, ModbusDrv.exe.

The driver is started when a programmable logic controller is connected to the serial port on a server. It creates a listener on TCP port 27700, and when a connection is made the Modbus Application Header is read into a buffer, the ICS-CERT advisory said.

If the header is too large, a stack-based overflow results. The advisory cautions that a second overflow vulnerability is also exploitable by overwriting the return address. By doing so, an attacker could execute code remotely.
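The defensive check the driver reportedly lacked is straightforward: validate the length field of the Modbus/TCP (MBAP) header before copying the body into a fixed-size buffer. A hedged Python sketch of that check (the function name and size cap below are illustrative, not Schneider’s actual code):

```python
import struct

MAX_BODY = 254  # Modbus caps a TCP ADU at 260 bytes; 6 are the MBAP header

def read_modbus_body(header: bytes, recv):
    """Parse an MBAP header and read only as many body bytes as a
    fixed-size buffer can safely hold. `recv` stands in for a socket
    read such as the one the driver performs on port 27700."""
    if len(header) != 6:
        raise ValueError("MBAP header must be exactly 6 bytes")
    transaction_id, protocol_id, length = struct.unpack(">HHH", header)
    if not 0 < length <= MAX_BODY:  # the missing bounds check
        raise ValueError("declared length %d out of range" % length)
    return recv(length)

# A frame announcing a 4-byte body is read normally...
ok = read_modbus_body(struct.pack(">HHH", 1, 0, 4), lambda n: b"\x00" * n)
assert ok == b"\x00\x00\x00\x00"

# ...while an oversized length field is rejected instead of overflowing.
try:
    read_modbus_body(struct.pack(">HHH", 1, 0, 0xFFFF), lambda n: b"")
except ValueError:
    pass
```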

The vulnerable software driver is used across a gamut of industries, including chemicals, manufacturing, energy, nuclear reactors, government facilities, dams and transportation systems, primarily in the United States, Europe and China.

ICS-CERT said it is not aware of any public exploits. The patch is available from Schneider Electric.

ICS-CERT said the following Schneider products contain the vulnerable Modbus driver:

  • TwidoSuite Versions 2.31.04 and earlier,
  • PowerSuite Versions 2.6 and earlier,
  • SoMove Versions 1.7 and earlier,
  • SoMachine Versions 2.0, 3.0, 3.1, and 3.0 XS,
  • Unity Pro Versions 7.0 and earlier,
  • UnityLoader Versions 2.3 and earlier,
  • Concept Versions 2.6 SR7 and earlier,
  • ModbusCommDTM sl Versions 2.1.2 and earlier,
  • PL7 Versions 4.5 SP5 and earlier,
  • SFT2841 Versions 14, 13.1 and earlier, and
  • OPC Factory Server Versions 3.50 and earlier.

“The affected products are mostly software-based utilities and engineering tools designed for programming and configuring process, machine, and general control applications,” the ICS-CERT advisory said. “These applications rely on a common driver to communicate with PLCs.”

This is the third time this year that ICS-CERT has issued an alert about vulnerabilities in Schneider Electric gear. In January, an advisory was sent out about a remotely exploitable resource consumption vulnerability that was patched in Schneider’s ClearSCADA software. ClearSCADA is secure remote management software designed for use in large, geographically dispersed critical infrastructure systems.

In March, the company patched vulnerabilities in Schneider OPC Factory Server, which is an interface for client applications that require access to production data in real time. The buffer overflow flaws were not remotely exploitable, yet could allow an attacker with local access to run malicious programs on a computer running the vulnerable server software.

White House Releases Proposal to End Section 215 Bulk Collection

Thu, 03/27/2014 - 14:47

The White House today unveiled a five-point plan to end the National Security Agency’s bulk collection of phone call metadata, preserving what it says is a balance between the intelligence community’s national security needs and the public’s desire to maintain its privacy.

The proposal ends the government’s collection of phone records under Section 215 of the PATRIOT Act as it exists today, keeping that data with telecommunications providers who will store those records for 18 months as they are currently federally mandated to do.

The government would have access to the records only under approval from the secret Foreign Intelligence Surveillance Court (FISC), which must approve the querying of a suspect phone number and only after judicial approval based on a national security concern.

Currently, the NSA collects and stores call metadata, and maps connections between numbers belonging to individuals suspected of terrorism or threatening national security. As the Snowden leaks began last June, the depths of NSA surveillance, including dragnet capturing of all Americans’ phone calls without warrants, drew the ire of civil libertarians, mainstream media and politicians on both sides of the aisle.

The new plan was ordered by President Obama during a Jan. 17 address to the nation on surveillance. During that speech, he ordered the Attorney General and the intelligence community to work together on an adequate solution that would alter the collection of data under Section 215. Obama imposed a March 28 deadline for the proposal, the day FISC is expected to renew the NSA program for another 90-day cycle, the final time it will do so.

The White House proposal, hints of which were released two days ago in a New York Times report, also changes the number of hops the government will be able to collect between suspects from three to two. While apparently a concession, ACLU National Security advisor and attorney Brett Max Kaufman told Threatpost this remains a red flag for privacy advocates.

“It’s unclear, if the government is able to satisfy FISC’s standard of a reasonable, articulable suspicion, why anyone connected to that person would also satisfy that same standard to get their call records?” Kaufman said.

The president’s proposal was a bit more stringent than a similar House Intelligence Committee bill that was introduced on Tuesday, which did not require prior judicial approval; a judge would rule on a request only after the FBI submits it to a provider.

Verizon general counsel Randal Milch said the provider supports the efforts to end bulk collection.

“At this early point in the process, we propose this basic principle that should guide the effort: the reformed collection process should not require companies to store data for longer than, or in formats that differ from, what they already do for business purposes,” Milch said. “If Verizon receives a valid request for business records, we will respond in a timely way, but companies should not be required to create, analyze or retain records for reasons other than business purposes.”

The final two provisions of today’s official proposal say the court-approved numbers can only be used for a limited period of time without again requiring approval from FISC. “The production of records would be ongoing and prospective,” the proposal said.

Also, under court order, the phone companies would be required to provide technical assistance to ensure the records can be accessed in a timely fashion and in an accessible format.

The White House plan would need to be ratified by Congress in order to go into effect, and because of this, the Department of Justice will seek another 90-day renewal from FISC for the program, much to the chagrin of experts.

“EPIC is encouraged by the President’s continued commitment to end the bulk collection program … however, the renewal of the FISC order on Friday would be a disappointing development,” said Alan Butler, appellate advocacy counsel for the Electronic Privacy Information Center (EPIC). “The bulk collection program will not end until the FISC order expires without the President seeking its renewal.”

New Platform Protects Data From Arbitrary Server Compromises

Thu, 03/27/2014 - 14:43

Researchers are in the midst of rolling out a secure new platform for building web applications that can protect confidential data from being stolen in the event attackers gain full access to servers.

The platform, Mylar, is the result of a project spearheaded by students at the Massachusetts Institute of Technology (M.I.T.) set to be discussed at USENIX’s Symposium on Networked Systems Design and Implementation conference next week in Seattle.

According to the paper, “Building web applications on top of encrypted data using Mylar” (.PDF), the platform can encrypt data on servers and decrypt it in users’ browsers, provided they have the correct key.

As it is, there are several ways in which data can be leaked from servers: Attackers could exploit a vulnerability and break in; a prying admin could overstep their bounds; or a server operator could be forced to disclose data by law.

While Mylar’s goal is to keep confidential data safe by preventing these incidents from happening, it does so by operating under the premise that the server where the data is stored has already been hacked.

“Mylar assumes that any part of the server can be compromised, either as a result of software vulnerabilities or because the server operator is untrustworthy, and protects data confidentiality in this setting,” according to the paper.
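That premise can be illustrated with a toy client-side encryption sketch: the key lives only in the user’s browser, so a fully compromised server holds nothing but ciphertext. (The XOR one-time pad below is purely illustrative; Mylar’s actual cryptography is far more sophisticated.)

```python
import secrets

def xor_bytes(key: bytes, data: bytes) -> bytes:
    """One-time-pad XOR; encryption and decryption are the same operation."""
    assert len(key) == len(data)  # a one-time pad needs a key per message
    return bytes(k ^ b for k, b in zip(key, data))

message = b"patient symptom log"
key = secrets.token_bytes(len(message))     # never leaves the client

stored_on_server = xor_bytes(key, message)  # all a compromised server sees
assert stored_on_server != message
assert xor_bytes(key, stored_on_server) == message  # client-side decryption
```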

Raluca Ada Popa, the paper’s lead author and a Ph.D. Candidate at the school’s Department of Electrical Engineering and Computer Science, worked with six colleagues from the school’s Computer Science and Artificial Intelligence Laboratory (CSAIL) for nearly two years on the project.

The paper takes note of recent privacy-minded applications such as Mega and Cryptocat, acknowledging that while those apps let users decrypt information from servers in the browser using special keys, they still have their drawbacks.

Or as a description of the platform on M.I.T.’s website puts it, “simply encrypting each user’s data with a user key does not suffice.”

Mainly, that’s because these apps don’t allow data sharing, they make keyword searches difficult, and, perhaps most concerning, they can still be tricked into letting the server extract user keys and data via malicious code.

To allow data sharing on Mylar, a special mechanism establishes the correctness of keys obtained from the server – backed up by X.509 certificate paths – to ensure that a server that has been compromised cannot trick the app into using a bogus key. This allows multiple users, with keys, to share the same item.

To verify app code, Mylar keeps application code and data separate, checking to make sure code it runs is properly signed by the website owner, something that in turn keeps HTML pages that are supplied by the server static.

Many schemes require that document data be encrypted under a single key, which rules out easy keyword searches. A unique cryptographic scheme in Mylar allows clients to search for a keyword across many documents encrypted under different keys; the server returns a list of instances of that word without ever learning the word itself or the contents of the documents.

Mylar owes a lot to this specialized search scheme, which Ada Popa says she discovered last May and which helped get the ball rolling on the platform soon after.

Ada Popa and her team started working on the project in 2012, but it would take another year and a half to truly come to fruition. The researchers initially tried to build the framework over Django and Ruby on Rails before realizing that the way the two platforms are designed made them incompatible with what they were looking for from an encryption and confidentiality standpoint.

In the summer of 2013, the group realized that the more secure Meteor, an emerging open source web framework, was their best option. Developers from Meteor helped the team test the software, and not long after, Ada Popa came up with the multi-key search scheme, pieced together from elliptic curves, and they were off.

Three months and a few design tweaks later, Mylar was complete.

According to the paper, if adopted, the platform would require little effort by developers. The researchers ported six applications over to Mylar and only needed 36 additional lines of code on average, per app, to protect sensitive data.

The six apps that researchers have tested Mylar on so far consist of a website that lets endometriosis patients record their symptoms, a website for managing homework and grades, a chat application, a forum, a calendar and a photo sharing app.

It might not be long until Mylar catches on with some of those apps in real life.

Two of those apps, the medical app and the website that lets professors at M.I.T. manage homework and grades, actually plan to implement Mylar in the immediate future.

Endometriosis patients at Newton-Wellesley Hospital, a medical center in Newton, Mass., tested the medical app a month ago. According to Ada Popa, it should be out of alpha deployment in another month or so, following approval from the Institutional Review Board (IRB). Since the app transfers highly sensitive patient information, however, she wouldn’t be surprised if the review period took a little longer than usual.

Professors in CSAIL’s Computer Systems Security classes have successfully used an app running on Mylar for managing students’ homework and grade information.

Still, while the researchers stress that Mylar isn’t perfect, it does work, provided users exercise a modicum of responsibility when it comes to privacy and security.

While Mylar’s main goal is to protect data in arbitrary server compromises, the platform assumes users are not running it on a compromised machine or sharing information with untrustworthy users. Mylar also assumes users check that they are on the HTTPS version of the site or app they’re using and can recognize phishing attacks.

While it sounds promising for PC usage, the platform could also have a future on Android systems. The researchers say they’ve tested Mylar on phones running Google’s operating system but left the results out of their paper for brevity’s sake.

“Mylar’s techniques for searching over encrypted data and for verifying keys are equally applicable to desktop and mobile phone applications; the primary difference is that code verification becomes simpler, since applications are explicitly installed by the user, instead of being downloaded at application start time,” according to the paper.

The team’s research was aided by a handful of firms including Google, the National Science Foundation, and DARPA’s Clean-Slate Design of Resilient, Adaptive, Secure Hosts (CRASH) program – a program dedicated to crafting cyber-attack resistant systems.

This is the latest piece of software designed by Ada Popa, who considers Mylar the follow-up to CryptDB, software she devised in 2011 that did much the same thing as Mylar, but for databases.

“We started working on this project as a natural next step after the previous project, CryptDB, which did the same for databases,” Ada Popa said. “We realized that web applications are an even more common use case for placing on a cloud or on a compromised server.”

CryptDB encrypted information and ran SQL queries without decrypting the database. Some of Ada Popa’s CryptDB research even found its way into a system Google released later that year, Encrypted BigQuery, which can run SQL-like queries against large, multi-terabyte datasets.
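The core trick behind equality queries over encrypted data can be sketched with deterministic tokens: if the same plaintext always encrypts to the same token, the server can match rows without ever seeing the plaintext. The sketch below uses an HMAC as a toy stand-in; CryptDB’s actual deterministic layer uses block ciphers and layered “onions” of encryption, and the key name here is hypothetical.

```python
import hashlib
import hmac

KEY = b"client-held-secret"  # hypothetical key, never sent to the server


def det_token(value: str) -> str:
    # Deterministic: identical plaintexts map to identical tokens,
    # so the server can test equality without decrypting anything.
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()


# The "server-side" column stores only tokens, never plaintext.
encrypted_rows = [det_token(name) for name in ["alice", "bob", "carol"]]

# The client issues a query by sending the token for its search term.
query = det_token("bob")
matches = [i for i, row in enumerate(encrypted_rows) if row == query]
print(matches)  # the row index of "bob"
```

The trade-off, which CryptDB manages by adjusting encryption layers per column, is that determinism leaks which rows share a value.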

Ada Popa plans to present Mylar in USENIX’s Security and Privacy session next Wednesday and demonstrate the platform later that afternoon alongside one of the paper’s co-authors, Jonas Helfer.

NTP Amplification, SYN Floods Drive Up DDoS Attack Volumes

Thu, 03/27/2014 - 13:23

There has been a steady but dramatic increase in the potency of distributed denial of service (DDoS) attacks from the beginning of 2013 through the first two months of this year. In large part, the rise in volume is due to the widespread adoption of two attack methods: large synchronization packet flood (SYN flood) attacks and Network Time Protocol (NTP) amplification attacks.

According to an Incapsula report tracking the DDoS threat landscape during this 14-month period, the largest such attacks in February 2013 were delivering traffic at a rate of four gigabits per second (Gbps). By July 2013, 60 Gbps and larger DDoS attacks had become a weekly occurrence. In February of 2014, Incapsula reports having witnessed one NTP amplification attack peaking at 180 Gbps. Other reports have found the volume of NTP amplification attacks as high as 400 Gbps.

DDoS Attacks Increase in Volume

“As early as February 2013 we were able to track down a single source 4Gbps attacking server, which – if amplified – could alone have generated over 200Gbps in attack traffic,” the report claims. “With such available resources it is easy to explain the uptick in attack volume we saw over the course of the year.”
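The arithmetic behind that claim is simple bandwidth amplification: a reflection attack multiplies the attacker’s upstream capacity by the ratio of response size to request size. A minimal sketch, using only the figures Incapsula cites (4 Gbps amplified to over 200 Gbps, an implied factor of at least 50x):

```python
# Bandwidth amplification: traffic the reflectors emit is the
# attacker's upstream bandwidth times the amplification factor.
def amplified_traffic(source_gbps: float, factor: float) -> float:
    """Peak attack traffic achievable from a given source bandwidth."""
    return source_gbps * factor


# Incapsula's example: 4 Gbps amplified to over 200 Gbps
# implies an amplification factor of at least 50x.
factor = 200 / 4
print(amplified_traffic(4, factor))  # 200.0 Gbps
```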

At present, large-scale DDoS attacks, which Incapsula defines as those of 20 Gbps and more, account for nearly one-third of all attacks. Attackers achieve these high volumes by launching large SYN floods and DNS and NTP amplification attacks.

Types of DDoS Attacks

A relatively new entrant to the DDoS landscape is a technique called the “hit and run” DDoS attack. These attacks first emerged in April 2013 and, according to Incapsula, target human-operated DDoS protections by exploiting weaknesses in services that must be manually triggered, like generic routing encapsulation (GRE) tunneling and DNS re-routing.

Not only is each classification of DDoS attack becoming more potent, but 81 percent of attacks exploit multiple vectors.

“Multivector tactics increase the attacker’s chance of success by targeting several different networking or infrastructure resources,” Incapsula claims. “Combinations of different offensive techniques are also often used to create ‘smokescreen’ effects, where one attack is used to create noise, diverting attention from another attack vector.” Furthermore, multivector attacks can be used for trial and error style reconnaissance as well.

Multi-Vector DDoS Attacks

The most commonly deployed attacks are a combination of two types of SYN floods – one deploying regular SYN packets and another using large SYN (above 250 bytes) packets.

“In this scenario, both attacks are executed at the same time, with the regular SYN packets used to exhaust server resources (e.g., CPU) and large SYN packets used to cause network saturation,” they say. “Today SYN combo attacks account for ~75% of all large scale network DDoS events (attacks peaking above 20Gbps). Overall, large SYN attacks are also the single most commonly used attack vector, accounting for 26% of all network DDoS events.”
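The two components of a SYN combo attack are distinguished purely by packet size, with the report drawing the line at 250 bytes. A toy classifier along those lines (not Incapsula’s implementation, and the sample sizes are made up for illustration):

```python
# Classify SYN packets by size, per the report's distinction:
# regular SYNs exhaust server resources (CPU, connection state),
# while "large" SYNs (over 250 bytes) saturate the network link.
LARGE_SYN_THRESHOLD = 250  # bytes, per the Incapsula report


def classify_syn(packet_len: int) -> str:
    return "large" if packet_len > LARGE_SYN_THRESHOLD else "regular"


# Hypothetical sample of observed SYN packet lengths.
sizes = [60, 60, 512, 900, 60]
counts = {"regular": 0, "large": 0}
for s in sizes:
    counts[classify_syn(s)] += 1
print(counts)
```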

However, in February 2014, NTP amplification attacks surpassed all others as the most commonly seen form of DDoS. This may be the beginning of a new trend or merely a temporary spike, but as the report notes, it is too early to tell.

Government Requests for Google User Data Continue to Climb

Thu, 03/27/2014 - 12:37

While the number of requests for user information that Google receives from governments around the world continues to rise–climbing by 120 percent in the last four years–the company is turning over some data in fewer cases as time goes on. Google received more than 27,000 requests for user information from global law enforcement agencies in the last six months of 2013 and provided some user data in 64 percent of those cases.

The new report from Google includes information on requests for user data from governments around the world, as well as new data on National Security Letters sent by the United States government to Google. In the second half of 2013, Google received between 0 and 999 NSLs, the same range it has reported in all previous periods going back to January 2009. However, those letters affected more users or accounts this time, between 1,000 and 1,999, up from 0 to 999 in the first six months of 2013.

The U.S. government only allows companies to report NSLs in ranges of 1,000. The Google transparency report also includes data on orders from the Foreign Intelligence Surveillance Court, but that information is subject to a six-month delay, so there is no data for June through December 2013. In the first six months of last year, Google received between 0 and 999 content requests and the same number of non-content requests.

As usual, the U.S. was the largest contributor to the volume of requests for user data that Google reported, sending 10,574 requests, covering 18,254 accounts. France was second, with 2,750 requests for information about 3,378 accounts. Germany, India, the U.K. and Brazil followed.

“Government requests for user information in criminal cases have increased by about 120 percent since we first began publishing these numbers in 2009. Though our number of users has grown throughout the time period, we’re also seeing more and more governments start to exercise their authority to make requests,” Richard Salgado, Legal Director, Law Enforcement and Information Security at Google, wrote in a blog post on the report.

“We consistently push back against overly broad requests for your personal information, but it’s also important for laws to explicitly protect you from government overreach. That’s why we’re working alongside eight other companies to push for surveillance reform, including more transparency. We’ve all been sharing best practices about how to report the requests we receive, and as a result our Transparency Report now includes governments that made less than 30 requests during a six-month reporting period, in addition to those that made 30+ requests.”

When Google first began reporting the percentage of user data requests that it complies with in some way in 2010, the company reported providing some information in 76 percent of cases. That number has decreased steadily in the years since, down to the 64 percent Google complied with in some way in the second half of 2013.

Malware Hijacks Android Mobile Devices to Mine Cryptocurrency

Thu, 03/27/2014 - 11:44

On its surface, the idea of turning a smartphone into a cryptocurrency mining machine sounds novel. But practical and profitable? Not so much.

That hasn’t stopped thieves from corrupting a number of popular Android applications for just that purpose, including two on the Google Play store called Songs and Prized; Songs has been downloaded a million times.

Several versions of the CoinKrypt malware exist, said researchers at mobile security company Lookout. The malicious CoinKrypt apps, Lookout said, have been confined to forums in Spain and France that distribute pirated software.

CoinKrypt is an add-on to a legitimate app and hijacks an Android phone’s resources—which are limited for this purpose to begin with—in order to mine Litecoin, Dogecoin, and Casinocoin.

Desktop computers, for example, have far more resources to dedicate to this purpose than a mobile device, yet are still insufficient to mine coins at a profit.

People do mine coins rather than buy them, using purpose-built software to do so. Essentially, miners lend their machine’s processing power and in return are rewarded with new coins.

Mining digital currency, however, does come with some gotchas, especially on a mobile device. Mining is a resource hog: it can quickly drain battery life, overheat hardware and cause damage, or exhaust a user’s data plan by downloading a blockchain, or transaction history, which can be gigabytes in size.

Lookout experts said CoinKrypt lacks a feature native to other mining software that throttles the rate at which coins are mined in order to protect the hardware from damage. This may also be why the attackers are staying away from mining Bitcoin, which, despite being far more valuable, is much more difficult to mine.

“This leads us to believe this criminal is experimenting with malware that can take advantage of lower-hanging digital currency fruit that might yield more coins with less work,” said Marc Rogers, a researcher with Lookout. “With the price of a single Bitcoin at $650 and other newer currencies such as Litecoin approaching $20 for a single coin we are in the middle of a digital gold rush. CoinKrypt is the digital equivalent of a claim jumper.”

Rogers said it’s almost a million times easier to mine Litecoin than Bitcoin, and 3.5 million times easier to mine Dogecoin.

“When we tested the feasibility of mining using a Nexus 4 by using Android mining software such as the application ‘AndLTC,’ we were only able to attain a rate of about 8Kh/s – or 8,000 hash calculations per second, the standard unit of measure for mining,” Rogers said. “Using a Litecoin calculator and the difficulty setting mentioned above we can see that this would net us 0.01 LTC after seven days non-stop mining. That’s almost 20 cents.”
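Lookout’s back-of-envelope numbers check out: at roughly $20 per Litecoin, 0.01 LTC over seven days is about 20 cents, or under three cents a day. A quick sketch of that arithmetic, using only the figures in the article:

```python
# Back-of-envelope check of Lookout's Nexus 4 test: 8 kH/s for seven
# days yields roughly 0.01 LTC, worth about 20 cents at ~$20 per coin.
hashrate_khs = 8       # Nexus 4 mining rate, per Lookout
days = 7
ltc_mined = 0.01       # Lookout's estimate at then-current difficulty
ltc_price_usd = 20     # approximate Litecoin price cited in the article

usd_earned = ltc_mined * ltc_price_usd  # about $0.20 for the week
usd_per_day = usd_earned / days         # under three cents a day
print(round(usd_earned, 2), round(usd_per_day, 4))
```

At that rate, the only way the scheme pays is scale: the attacker bears none of the battery, hardware, or data-plan costs across thousands of infected devices.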

Other samples, Rogers said, have been targeting newer digital coins in order to avoid these issues.

Researchers at G Data Software also found mining software embedded in a version of the TuneIn Radio Pro app on the Google Play store. The Trojan, dubbed MuchSad, mines Dogecoin in addition to serving streaming radio to the user.

“The malicious functionality is put on hold when the user of the smartphone or tablet is using it. When the malicious app is first launched, a service called ‘Google Service’ is initialized,” researchers at G Data said. “After five seconds, and thereafter every twenty minutes, this checks whether the user is actively using the device. If the device is free – not in use – the malicious app starts to ‘mine’ Dogecoins for the attacker.”
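The polling cadence G Data describes, one check five seconds after launch and then one every twenty minutes, can be sketched as a simple schedule. This is an illustration of the described timing only, not the app’s code; the function name is hypothetical.

```python
# Polling schedule per G Data's description: first idleness check
# five seconds after launch, then one check every twenty minutes.
INITIAL_DELAY_S = 5
POLL_INTERVAL_S = 20 * 60  # twenty minutes, in seconds


def schedule_checks(n: int) -> list:
    """Return the first n check times, in seconds after app launch."""
    return [INITIAL_DELAY_S + i * POLL_INTERVAL_S for i in range(n)]


print(schedule_checks(3))  # [5, 1205, 2405]
```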

In three days, the attacker was able to mine nearly 1,900 Dogecoins, or about $6.
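Those figures imply a per-coin price of roughly a third of a cent, which shows why Dogecoin mining only makes sense for an attacker at scale. A quick check of the arithmetic, using only the numbers G Data reported:

```python
# Implied per-coin value from G Data's figures:
# ~1,900 DOGE mined over three days was worth about $6.
doge_mined = 1900
usd_value = 6
days = 3

price_per_doge = usd_value / doge_mined  # roughly $0.0032 per coin
usd_per_day = usd_value / days           # about $2 a day for the attacker
print(round(price_per_doge, 4), usd_per_day)
```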

“The only clues that might quickly raise a user’s suspicions are the increased battery usage and the heat from the mobile phone, due to the constant high load at times when the user is not actively using the device. You can even see the battery consumption in the Android system logs,” G Data researchers said. “However, the ‘Google Service’ disguise will very probably come into play again here. Barely a single user will question such battery consumption, assuming it is a system process.”

Image courtesy BT Keychain

Data Breaches Show Difficulty of Defenders’ Task

Thu, 03/27/2014 - 11:14

When attackers broke into the network of the University of Maryland last month, the university wasn’t sure how to react. The organization had never had a major security incident before, and this one qualified as major: 310,000 Social Security numbers and other information were gone. And then three weeks later, it happened again.

Wallace Loh, the president of the University of Maryland, told the Senate Commerce Committee Wednesday that the university’s security and IT team was caught off guard when the attackers infiltrated the college’s network on Feb. 18. The attackers made their initial intrusion by uploading a piece of malware to one of the university’s Web sites that is designed to allow users to upload photos. Once on the network, the attackers moved laterally, eventually found the directory for the university’s IT management team and were able to change the passwords they found there.

The attackers, who had come in over the Tor network to hide their identity and location, then located a database that stored Social Security numbers of students, alumni and others, as well as university IDs, and downloaded 310,000 of them.

“It turns out, because we’ve never been hacked before, we were just flying by the seat of our pants,” Loh told the committee in his testimony.

Within 24 hours of discovering the breach, the university had disclosed it publicly, contacted credit-monitoring services and begun notifying the people who were affected. The university got in touch with the FBI, which came in to investigate the attack. Three weeks later, while the FBI was still digging through the details of the Feb. 18 breach, attackers again compromised Maryland’s network and had access to quite a bit of sensitive information, more than was at risk during the first attack, in fact. This time, however, the attackers simply posted one victim’s personal details to Reddit as a show of force before the FBI investigators were able to mitigate the attack.

In the wake of the first attack, Loh said that the university’s IT team had taken a number of steps to harden its network and ensure that the organization was no longer storing data it didn’t need.

“We have migrated almost all of our Web sites to the cloud,” he said. “What we have done immediately is purge almost all unnecessary data. We have purged approximately two hundred and twenty-five thousand names from our records. We have isolated sensitive information. And the cost is very, very high.”

That cost is one that many organizations around the country are feeling. Target, the victim of one of the largest breaches in history last year, is still feeling the repercussions from the attack, which affected more than 100 million people. John Mulligan, the vice president and CFO of Target, also spoke before the Commerce Committee Wednesday, and said that the company is taking many of the same steps that Maryland did, including increasing segmentation on its networks. Mulligan also said that the company is expanding its use of two-factor authentication on its networks and will, by early next year, begin issuing and accepting chip-enabled credit cards.

The Target data breach and the attack on the University of Maryland illustrate a truism that many in the security industry have known for years.

“The people who play offense will always be one step ahead of those who play defense,” Loh said.