Threatpost for B2B

The First Stop For Security News

New Platform Protects Data From Arbitrary Server Compromises

Thu, 03/27/2014 - 14:43

Researchers are in the midst of rolling out a secure new platform for building web applications that can protect confidential data from being stolen in the event attackers gain full access to servers.

The platform, Mylar, is the result of a project spearheaded by students at the Massachusetts Institute of Technology (M.I.T.) and is set to be discussed at the USENIX Symposium on Networked Systems Design and Implementation (NSDI) next week in Seattle.

According to the paper, “Building Web Applications on Top of Encrypted Data Using Mylar” (.PDF), the platform stores data on servers only in encrypted form and decrypts it in users’ browsers, provided they have the correct key.

As it is, there are several ways in which data can be leaked from servers: Attackers could exploit a vulnerability and break in; a prying admin could overstep their bounds; or a server operator could be forced to disclose data by law.

Rather than trying to prevent these incidents, Mylar keeps confidential data safe by operating under the premise that the server where the data is stored has already been hacked.

“Mylar assumes that any part of the server can be compromised, either as a result of software vulnerabilities or because the server operator is untrustworthy, and protects data confidentiality in this setting,” according to the paper.
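That trust model can be sketched in a few lines: the server holds only opaque ciphertext, and keys never leave the client. The sketch below is a toy illustration of the model only; the hash-based stream cipher is a teaching device, not a real cipher, and Mylar itself uses standard browser-side cryptography.

```python
import hashlib
import secrets

# Toy sketch of Mylar's trust model: the server stores only ciphertext;
# encryption and decryption happen on the client with a key the server
# never sees. SHA-256 in counter mode stands in for a real cipher here.

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def client_encrypt(key: bytes, plaintext: bytes):
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def client_decrypt(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# The "server" is just untrusted storage of opaque blobs.
server_store = {}

key = secrets.token_bytes(32)            # lives only in the user's browser
nonce, ct = client_encrypt(key, b"patient symptom diary")
server_store["doc1"] = (nonce, ct)       # a compromised server sees only this

assert ct != b"patient symptom diary"
assert client_decrypt(key, *server_store["doc1"]) == b"patient symptom diary"
```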

Raluca Ada Popa, the paper’s lead author and a Ph.D. candidate in the school’s Department of Electrical Engineering and Computer Science, worked with six colleagues from the school’s Computer Science and Artificial Intelligence Laboratory (CSAIL) for nearly two years on the project.

The paper points to recent privacy-minded applications such as Mega and Cryptocat, acknowledging that while those apps allow users to decrypt information from servers via browsers with special keys, they still have their drawbacks.

Or as a description of the platform on M.I.T.’s website puts it, “simply encrypting each user’s data with a user key does not suffice.”

Mainly, that’s because these apps don’t allow data sharing, they make keyword searches difficult and, perhaps most concerning, they can still be tricked into letting the server extract user keys and data via malicious code.

To allow data sharing on Mylar, a special mechanism establishes the correctness of keys obtained from the server – backed up by X.509 certificate paths – to ensure that a server that has been compromised cannot trick the app into using a bogus key. This allows multiple users, with keys, to share the same item.
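In spirit, that verification walks a chain of signed claims back to a key the client already trusts, so a compromised server cannot substitute its own key. A much-simplified sketch follows; real Mylar uses public-key signatures along X.509-style certificate paths, so the HMAC-based “signatures” and key names here are purely illustrative stand-ins.

```python
import hashlib
import hmac

# Simplified key-certification sketch: each link in the chain certifies
# a delegated key and is signed by the key certified in the previous
# link, rooted in a key the client trusts out of band.

def sign(signing_key: bytes, message: bytes) -> bytes:
    return hmac.new(signing_key, message, hashlib.sha256).digest()

def verify_chain(trusted_root: bytes, chain) -> bool:
    """chain: list of (claim, signature, delegated_key) tuples."""
    current = trusted_root
    for claim, sig, delegated in chain:
        if not hmac.compare_digest(sign(current, claim + delegated), sig):
            return False
        current = delegated
    return True

root = b"root-key-known-to-client"
alice_key = b"alice-principal-key"
chain = [(b"user:alice", sign(root, b"user:alice" + alice_key), alice_key)]

assert verify_chain(root, chain)

# A compromised server swapping in its own key fails verification,
# because the signature covers the genuine delegated key.
evil = [(b"user:alice", sign(root, b"user:alice" + alice_key), b"attacker-key")]
assert not verify_chain(root, evil)
```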

To verify app code, Mylar keeps application code and data separate and checks that any code it runs is properly signed by the website owner; the HTML pages supplied by the server are, in turn, kept static.

Many schemes require that document data be encrypted under a single key, which prevents easy keyword searches. A novel cryptographic scheme in Mylar lets clients search for a keyword across many documents encrypted under different keys; the server returns a list of matching documents without ever learning the word or the contents of the documents.
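A drastically simplified sketch of the matching idea: words are stored as keyed hashes so the server can compare search tokens without learning the underlying word. Mylar’s actual multi-key scheme is more sophisticated (built on elliptic curves, it lets the client submit a single token that the server adjusts across keys), so treat this only as intuition.

```python
import hashlib
import hmac

# Each document's words are indexed as opaque keyed hashes ("tokens")
# under that document's key. The server matches tokens blindly: it sees
# which documents match but never the word itself.

def token(doc_key: bytes, word: str) -> bytes:
    return hmac.new(doc_key, word.lower().encode(), hashlib.sha256).digest()

def index_doc(doc_key: bytes, text: str) -> set:
    return {token(doc_key, w) for w in text.split()}

# Server side: opaque token sets, one per document.
k1, k2 = b"key-for-doc1", b"key-for-doc2"
server_index = {"doc1": index_doc(k1, "lab results normal"),
                "doc2": index_doc(k2, "schedule follow up")}

def search(server_index, tokens_per_doc):
    """Client derives one token per document key; server matches blindly."""
    return [d for d, t in tokens_per_doc.items() if t in server_index[d]]

hits = search(server_index, {"doc1": token(k1, "results"),
                             "doc2": token(k2, "results")})
assert hits == ["doc1"]
```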

Mylar owes a lot to this specialized search scheme, which Ada Popa says she devised last May, getting the ball rolling on the platform soon after.

Ada Popa and her team started working on the project in 2012, but it would take another year and a half to truly come to fruition. The researchers initially tried to build the framework over Django and Ruby on Rails before realizing the way the two platforms are designed made them incompatible with what they were looking for from an encryption and confidentiality standpoint.

In the summer of 2013, the group realized that the more secure Meteor, an emerging open source web framework, was their best option. Developers from Meteor helped the team test the software, and it wasn’t long before Ada Popa came up with the multikey search scheme, pieced together from elliptic curves, and they were off.

Three months and a few design tweaks later, Mylar was complete.

According to the paper, if adopted, the platform would require little effort by developers. The researchers ported six applications over to Mylar and only needed 36 additional lines of code on average, per app, to protect sensitive data.

The six apps that researchers have tested Mylar on so far consist of a website that lets endometriosis patients record their symptoms, a website for managing homework and grades, a chat application, a forum, a calendar and a photo sharing app.

It might not be long until Mylar catches on with some of those apps in real life.

Two of those apps, the medical app and the website that lets professors at M.I.T. manage homework and grades, are actually slated to implement Mylar in the immediate future.

Endometriosis patients at Newton-Wellesley Hospital, a medical center in Newton, Mass., tested the medical app a month ago. According to Ada Popa, it should be out of alpha deployment in another month or so, following approval from the Institutional Review Board (IRB). Since the app transfers highly sensitive patient information, however, she wouldn’t be surprised if the review period took a little longer than usual.

Professors in CSAIL’s Computer Systems Security classes have successfully used an app running on Mylar for managing students’ homework and grade information.

Still, the researchers stress that while Mylar isn’t perfect, it does work, provided users exercise a modicum of responsibility when it comes to privacy and security.

While Mylar’s main goal is to protect data in arbitrary server compromises, the framework assumes users are not running it on a compromised machine or sharing information with untrustworthy users. Mylar also assumes users check that they are using the HTTPS version of the site or app and can safely recognize phishing attacks.

While it sounds promising for PC usage, the platform could also have a future on Android systems. The researchers claim they’ve tested Mylar on phones running the Google operating system but left the results out of their paper for brevity’s sake.

“Mylar’s techniques for searching over encrypted data and for verifying keys are equally applicable to desktop and mobile phone applications; the primary difference is that code verification becomes simpler, since applications are explicitly installed by the user, instead of being downloaded at application start time,” according to the paper.

The team’s research was aided by a handful of organizations including Google, the National Science Foundation, and DARPA’s Clean-Slate Design of Resilient, Adaptive, Secure Hosts (CRASH) program – a program dedicated to crafting cyber-attack-resistant systems.

This is the latest piece of software designed by Ada Popa, who considers Mylar the follow up to CryptDB, a piece of software she devised in 2011 that more or less did the same thing that Mylar does, but for databases.

“We started working on this project as a natural next step after the previous project, CryptDB, which did the same for databases,” Ada Popa said. “We realized that web applications are an even more common use case for placing on a cloud or on a compromised server.”

CryptDB encrypted information and ran SQL queries without decrypting the database. Some of Ada Popa’s CryptDB research even found its way into a system Google released later that year, Encrypted BigQuery, which can run SQL-like queries against large, multi-terabyte datasets.

Ada Popa plans to present Mylar in USENIX’s Security and Privacy session next Wednesday and demonstrate the platform later that afternoon alongside one of the paper’s co-authors, Jonas Helfer.

NTP Amplification, SYN Floods Drive Up DDoS Attack Volumes

Thu, 03/27/2014 - 13:23

There has been a steady but dramatic increase in the potency of distributed denial of service (DDoS) attacks from the beginning of 2013 through the first two months of this year. In large part, the reason for this rise in volume has to do with the widespread adoption of two attack methods: large synchronization packet flood (SYN flood) attacks and Network Time Protocol (NTP) amplification attacks.

According to an Incapsula report tracking the DDoS threat landscape during this 14-month period, the largest such attacks in February 2013 were delivering traffic at a rate of four gigabits per second (Gbps). By July 2013, DDoS attacks of 60 Gbps and larger had become a weekly occurrence. In February of 2014, Incapsula reports having witnessed one NTP amplification attack peaking at 180 Gbps. Other reports have found the volume of NTP amplification attacks as high as 400 Gbps.

DDoS Attacks Increase in Volume

“As early as February 2013 we were able to track down a single source 4Gbps attacking server, which – if amplified – could alone have generated over 200Gbps in attack traffic,” the report claims. “With such available resources it is easy to explain the uptick in attack volume we saw over the course of the year.”
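The arithmetic behind that claim is straightforward: spoofed requests from a 4 Gbps source, reflected off servers that reply with roughly 50 times as much data as they receive, produce about 200 Gbps of attack traffic. A quick sanity check of the quoted numbers follows; the 50x factor is inferred from them, not stated in the report.

```python
# Amplification attacks multiply a source's bandwidth by the ratio of
# response size to request size at the reflecting servers.

def amplified_gbps(source_gbps: float, amplification_factor: float) -> float:
    return source_gbps * amplification_factor

def factor_needed(source_gbps: float, target_gbps: float) -> float:
    return target_gbps / source_gbps

# 4 Gbps of spoofed requests at ~50x amplification -> ~200 Gbps.
assert amplified_gbps(4, 50) == 200
assert factor_needed(4, 200) == 50.0
```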

At present, large-scale DDoS attacks, which Incapsula defines as those of 20 Gbps and more, account for nearly one-third of all attacks. Attackers are able to achieve these high volumes by launching large SYN floods and DNS and NTP amplification attacks.

Types of DDoS Attacks

A new entrant to the DDoS landscape is a technique called “hit and run” DDoS attacks. These attacks first emerged in April 2013, and, according to Incapsula, target human-controlled DDoS protections by exploiting weaknesses in services that are supposed to be manually triggered, like generic routing encapsulation tunneling and domain name server re-routing.

Not only is each classification of DDoS attack becoming more potent, but 81 percent of attacks exploit multiple vectors.

“Multivector tactics increase the attacker’s chance of success by targeting several different networking or infrastructure resources,” Incapsula claims. “Combinations of different offensive techniques are also often used to create ‘smokescreen’ effects, where one attack is used to create noise, diverting attention from another attack vector.” Furthermore, multivector attacks can be used for trial and error style reconnaissance as well.

Multi-Vector DDoS Attacks

The most commonly deployed attacks are a combination of two types of SYN floods – one deploying regular SYN packets and another using large SYN (above 250 bytes) packets.

“In this scenario, both attacks are executed at the same time, with the regular SYN packets used to exhaust server resources (e.g., CPU) and large SYN packets used to cause network saturation,” they say. “Today SYN combo attacks account for ~75% of all large scale network DDoS events (attacks peaking above 20Gbps). Overall, large SYN attacks are also the single most commonly used attack vector, accounting for 26% of all network DDoS events.”

However, in February 2014, NTP amplification attacks surpassed all others as the most commonly seen form of DDoS. This may be the beginning of a new trend or merely a temporary spike, but as the report notes, it is too early to tell.

Government Requests for Google User Data Continue to Climb

Thu, 03/27/2014 - 12:37

While the number of requests for user information that Google receives from governments around the world continues to rise–climbing by 120 percent in the last four years–the company is turning over some data in fewer cases as time goes on. Google received more than 27,000 requests for user information from global law enforcement agencies in the last six months of 2013 and provided some user data in 64 percent of those cases.

The new report from Google includes information on requests for user data from governments around the world, as well as new data on National Security Letters sent by the United States government to Google. In the second half of 2013, Google received between 0-999 NSLs, the same range it reported in all of the previous periods, going back to January 2009. However, those letters affected more users or accounts this time, between 1000-1999, up from 0-999 in the first six months of 2013.

The U.S. government only allows companies to report NSLs in ranges of 1,000. The Google transparency report also includes data on orders from the Foreign Intelligence Surveillance Court, but that information is subject to a six-month delay, so there is no data for June through December 2013. In the first six months of last year, Google received 0-999 content requests and the same number of non-content requests.

As usual, the U.S. was the largest contributor to the volume of requests for user data that Google reported, sending 10,574 requests, covering 18,254 accounts. France was second, with 2,750 requests for information about 3,378 accounts. Germany, India, the U.K. and Brazil followed.

“Government requests for user information in criminal cases have increased by about 120 percent since we first began publishing these numbers in 2009. Though our number of users has grown throughout the time period, we’re also seeing more and more governments start to exercise their authority to make requests,” Richard Salgado, Legal Director, Law Enforcement and Information Security at Google, wrote in a blog post on the report.

“We consistently push back against overly broad requests for your personal information, but it’s also important for laws to explicitly protect you from government overreach. That’s why we’re working alongside eight other companies to push for surveillance reform, including more transparency. We’ve all been sharing best practices about how to report the requests we receive, and as a result our Transparency Report now includes governments that made less than 30 requests during a six-month reporting period, in addition to those that made 30+ requests.”

When Google first began reporting the percentage of user data requests that it complies with in some way in 2010, the company reported providing some information in 76 percent of cases. That number has decreased steadily in the years since, down to the 64 percent Google complied with in some way in the second half of 2013.

Malware Hijacks Android Mobile Devices to Mine Cryptocurrency

Thu, 03/27/2014 - 11:44

On its surface, the idea of turning a smartphone into a cryptocurrency mining machine sounds novel. But practical and profitable? Not so much.

That hasn’t stopped thieves from corrupting a number of popular Android applications for just that purpose, including two on the Google Play store called Songs and Prized; Songs has been downloaded a million times.

Several versions of the CoinKrypt malware exist, too, said researchers at mobile security company Lookout. The malicious CoinKrypt apps, Lookout said, have been confined to forums in Spain and France that distribute pirated software.

CoinKrypt is an add-on to a legitimate app and hijacks an Android phone’s resources—which are limited for this purpose to begin with—in order to mine Litecoin, Dogecoin, and Casinocoin.

Desktop computers, for example, have far more resources that can be dedicated to this purpose than a mobile device, and yet even they are generally insufficient to mine coins at a profit.

People do mine coins, rather than buy them, using purpose-built software to do so. Essentially, people who mine are lending their machine’s processing power for the purpose, and in return are rewarded with a new coin.

Mining digital currency, however, does come with some gotchas, especially on a mobile device. Namely, mining can be a resource hog and will quickly drain battery life, overheat hardware causing damage, or can exhaust a user’s data plan by downloading a blockchain, or transaction history, which can be gigabytes in size.

Lookout experts said that CoinKrypt does not include a feature, native to other mining software, that controls the rate at which coins are mined in order to protect the hardware from damage. This may also be why the attackers are staying away from mining Bitcoins, which, despite being far more valuable, are much more difficult to mine.

“This leads us to believe this criminal is experimenting with malware that can take advantage of lower-hanging digital currency fruit that might yield more coins with less work,” said Marc Rogers, a researcher with Lookout. “With the price of a single Bitcoin at $650 and other newer currencies such as Litecoin approaching $20 for a single coin we are in the middle of a digital gold rush. CoinKrypt is the digital equivalent of a claim jumper.”

Rogers said it’s almost one million times easier to mine Litecoin than Bitcoin; 3.5 million times easier to mine Dogecoin.

“When we tested the feasibility of mining using a Nexus 4 by using Android mining software such as the application ‘AndLTC,’ we were only able to attain a rate of about 8Kh/s – or 8,000 hash calculations per second, the standard unit of measure for mining,” Rogers said. “Using a Litecoin calculator and the difficulty setting mentioned above we can see that this would net us 0.01 LTC after seven days non-stop mining. That’s almost 20 cents.”
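Rogers’ figures check out with some back-of-the-envelope math. The 0.01 LTC weekly yield is taken from the article’s Litecoin-calculator result; only the totals are computed here.

```python
# A Nexus 4 managing ~8 kH/s, mining non-stop for seven days, nets
# roughly 0.01 LTC -- about 20 cents at ~$20 per coin.

HASH_RATE = 8_000                 # hashes per second (8 kH/s)
SECONDS_PER_WEEK = 7 * 24 * 3600  # seven days of non-stop mining

total_hashes = HASH_RATE * SECONDS_PER_WEEK  # ~4.8 billion hash attempts
week_yield_ltc = 0.01             # per the cited Litecoin calculator
ltc_price_usd = 20.0              # approximate price at the time

earnings = week_yield_ltc * ltc_price_usd
assert total_hashes == 4_838_400_000
assert round(earnings, 2) == 0.2  # about 20 cents for a week of mining
```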

Other samples, Rogers said, have been targeting newer digital coins in order to avoid these issues.

Researchers at G Data Software also found mining software embedded in a version of the TuneIn Radio Pro app on the Google Play store. The Trojan, dubbed MuchSad, mines Dogecoin in addition to serving streaming radio to the user.

“The malicious functionality is put on hold when the user of the smartphone or tablet is using it. When the malicious app is first launched, a service called ‘Google Service’ is initialized,” researchers at G Data said. “After five seconds, and thereafter every twenty minutes, this checks whether the user is actively using the device. If the device is free – not in use – the malicious app starts to ‘mine’ Dogecoins for the attacker.”
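G Data’s description implies a simple scheduling loop, reconstructed here as a hedged sketch; `is_device_in_use` and `mine_batch` are hypothetical stand-ins, not code from the actual Trojan.

```python
import time

# Conceptual reconstruction of MuchSad's scheduling as G Data describes
# it: wait five seconds after launch, then check every twenty minutes
# whether the device is in use, mining only while it sits idle.

INITIAL_DELAY = 5          # seconds after first launch
CHECK_INTERVAL = 20 * 60   # twenty minutes between usage checks

def run_miner(is_device_in_use, mine_batch, checks, sleep=time.sleep):
    mined = 0
    sleep(INITIAL_DELAY)
    for _ in range(checks):
        if not is_device_in_use():   # only mine while the device is idle
            mined += mine_batch()
        sleep(CHECK_INTERVAL)
    return mined

# Simulation: device busy on the first check, idle on the next two;
# sleeping is stubbed out so the test runs instantly.
usage = iter([True, False, False])
mined = run_miner(lambda: next(usage), lambda: 1, checks=3, sleep=lambda s: None)
assert mined == 2
```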

In three days, the attacker was able to mine nearly 1,900 Dogecoins, or about $6.

“The only clues that might quickly raise a user’s suspicions are the increased battery usage and the heat from the mobile phone, due to the constant high load at times when the user is not actively using the device. You can even see the battery consumption in the Android system logs,” G Data researchers said. “However, the ‘Google Service’ disguise will very probably come into play again here. Barely a single user will question such battery consumption, assuming it is a system process.”


Data Breaches Show Difficulty of Defenders’ Task

Thu, 03/27/2014 - 11:14

When attackers broke into the network of the University of Maryland last month, the university wasn’t sure how to react. The organization had never had a major security incident before, and this one qualified as major: 310,000 Social Security numbers and other information were gone. And then three weeks later, it happened again.

Wallace Loh, the president of the University of Maryland, told the Senate Commerce Committee Wednesday that the university’s security and IT team was caught off guard when the attackers infiltrated the college’s network on Feb. 18. The attackers made their initial intrusion by uploading a piece of malware to one of the university’s Web sites designed to allow users to upload photos. Once on the network, the attackers began to move laterally, eventually found the directory for the university’s IT management team and were able to change the passwords they found there.

The attackers, who had come in over the Tor network to hide their identity and location, then located a database that stored Social Security numbers of students, alumni and others, as well as university IDs, and downloaded 310,000 of them.

“It turns out, because we’ve never been hacked before, we were just flying by the seat of our pants,” Loh told the committee in his testimony.

Within 24 hours of discovering the breach, the university had disclosed the breach publicly, contacted credit-monitoring services and begun notifying the people who were affected by the breach. The university got in touch with the FBI, who came in to investigate the attack. Three weeks later, while the FBI was still digging through the details of the Feb. 18 breach, attackers again compromised Maryland’s network and had access to quite a bit of sensitive information, more than was at risk during the first attack, in fact. This time, however, the attackers simply posted one victim’s personal details to Reddit as a show of force before the FBI investigators were able to mitigate the attack.

In the wake of the first attack, Loh said that the university’s IT team had taken a number of steps to harden its network and ensure that the organization was no longer storing data it didn’t need.

“We have migrated almost all of our Web sites to the cloud,” he said. “What we have done immediately is purge almost all unnecessary data. We have purged approximately two hundred and twenty-five thousand names from our records. We have isolated sensitive information. And the cost is very, very high.”

That cost is one that many organizations around the country are feeling. Target, the victim of one of the larger breaches in history last year, is still feeling the repercussions from the attack, which affected more than 100 million people. John Mulligan, the vice president and CFO of Target, also spoke before the Commerce Committee Wednesday, and said that the company is going through many of the same machinations that Maryland did, including increasing segmentation on its networks. Mulligan also said that the company is expanding its use of two-factor authentication on its networks and will, by early next year, begin issuing and accepting chip-enabled credit cards.

The Target data breach and the attack on the University of Maryland illustrate a truism that many in the security industry have known for years.

“The people who play offense will always be one step ahead of those who play defense,” Loh said.

NSA Reforms Demonstrate Value of Public Debate

Wed, 03/26/2014 - 12:04

The Snowden leaks and the ensuing critical spotlight shone on the National Security Agency’s surveillance programs have nudged many technologists, privacy hounds and politicians away from their desks and onto the front lines calling for reforms.

Two nights ago, the New York Times reported that President Obama had responded to those calls and would soon reveal a new legislative proposal that would end the agency’s bulk collection of phone call records. While short on many important details, the move demonstrates that public debate still holds some sway with policy makers.

“The only [turning point] was disclosure of the program,” said Brett Max Kaufman, National Security Fellow and attorney with the American Civil Liberties Union. “Since that day, this has almost been inevitable because the claims the government made in secret to FISC [Foreign Intelligence Surveillance Court] and to Congress were never given a fair hearing from the other side. Once these programs became public and the government had to defend them in a court of law and in the court of public opinion, it was clear that these claims made in secret could not withstand arguments from civil libertarians and the public at large.”

While the president’s proposal carries the weight of the White House, it addresses only the NSA’s collection of phone metadata, and none of the other alleged surveillance activities made public by the Snowden documents. Other bills, such as the USA FREEDOM Act, extend beyond phone records to digital information collected through what’s known as the PRISM program under section 702 of the Foreign Intelligence Surveillance Act (FISA) and provide for enhanced oversight over intelligence gathering.

The Electronic Frontier Foundation, one of the most vocal advocacy groups opposing government surveillance of Americans, applauded the White House proposal yesterday, but endorsed the FREEDOM Act. The EFF called it “a giant step forward” and said it was a more favorable proposal than the president’s or another introduced by the House Intelligence Committee yesterday.

“Or better still, we urge the Administration to simply decide that it will stop misusing section 215 of the Patriot Act and section 702 of the FISA Amendments Act and Executive Order 12333 and whatever else it is secretly relying on to stop mass spying,” said EFF legal director Cindy Cohn and EFF legislative analyst Mark M. Jaycox. “The executive branch does not need congressional approval to stop the spying; nothing Congress has done compels it to engage in bulk collection.  It could simply issue a new Executive Order requiring the NSA to stop.”

The president’s proposal would end the NSA’s collection and storage of phone data; those records would remain with the providers and the NSA would require judicial permission under a new court order to access those records. The House bill, however, requires no prior judicial approval; a judge would rule on the request after the FBI submits it to the telecommunications company.

“It’s absolutely crucial to understand the details of how these things will work,” the ACLU’s Kaufman said in reference to the “new court order” mentioned in the New York Times report. “There is no substitute for robust democratic debate in the court of public opinion and in the courts. The system of oversight is broken and issues like these need to be debated in public.”

Phone metadata and dragnet collection of digital data from Internet providers and other technology companies is supposed to be used to map connections between foreigners suspected of terrorism and threatening the national security of the U.S. The NSA’s dragnet, however, also swept up communication involving Americans that is supposed to be collected and accessed only with a court order. The NSA stood by claims that the program was effective in stopping hundreds of terror plots against U.S. interests domestic and foreign. Those numbers, however, quickly were lowered as they were challenged by Congressional committees and public scrutiny.

“The president said the effectiveness of this program was one of the reasons it was in place,” Kaufman said. “But as soon as these claims were made public, journalists, advocates and the courts pushed back and it could not withstand the scrutiny. It’s remarkable how quickly [the number of] plots turned into small numbers. The NSA was telling FISC the program was absolutely necessary to national security, but the government would not go nearly that far in defending the program. That shows the value of public debate and an adversarial process in courts.”

GUI Vulnerabilities Expose Information Disclosure, Privilege Escalation

Wed, 03/26/2014 - 11:14

Developers are creating countless information disclosure and privilege escalation vulnerabilities by misusing elements of various graphical user interfaces as a mechanism for access control, according to a new research paper from the Northeastern University College of Computer and Information Science.

The paper – coauthored by Collin Mulliner, William Robertson, and Engin Kirda – explores GUI element misuse (GEM). The researchers note that GUIs are the primary conduit through which users interact with computer programs, and that each contains a unique list of visual elements – referred to in the paper as widgets. GUIs, the paper says, “typically provide the ability to set attributes in these widgets to control their visibility, enabled status, and whether they are writable.”

While these attributes are helpful for users, in the context of GUI-based applications with multiple levels of privilege, they are very often misused to control who has access to which information under what circumstances. The researchers claim that fairly unsophisticated attackers can use easily accessible programming utilities such as WinSpy++ or Spy++ to select, view, and modify any window in a system, including the hierarchy of widgets within those windows.

“For example, a developer might disable a text field if a user is not authorized to enter any input into the back-end database via the user interface,” the researchers wrote in their paper. “Generally speaking, developers might start to rely on user interface element attributes to enforce privilege levels within the application. Unfortunately, these user interface attributes are not suitable as an access control mechanism.”
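The disabled-text-field example maps onto a toy model like the following, where the back end trusts the widget state rather than re-checking authorization server-side. All class and field names here are illustrative, not taken from the paper or its tooling.

```python
# Toy model of the GEM pattern: the app "protects" an admin-only field
# by disabling its widget, but the back end trusts whatever the form
# submits. An attacker with a tool like Spy++ just flips the attribute.

class TextField:
    def __init__(self, name, enabled):
        self.name, self.enabled, self.value = name, enabled, ""

def render_form(user_is_admin):
    # Developer's "access control": non-admins get a disabled widget.
    return TextField("salary", enabled=user_is_admin)

def submit(field, database):
    # The server never re-checks authorization -- the GEM bug.
    database[field.name] = field.value

db = {}
field = render_form(user_is_admin=False)
assert not field.enabled              # UI says: read-only for this user

field.enabled = True                  # attacker flips the attribute
field.value = "999999"
submit(field, db)
assert db["salary"] == "999999"       # write succeeds despite the UI
```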

In other words, attackers could potentially modify widgets in order to achieve deeper access to information stored by or made accessible to the application deploying that widget. Problematically, such attacks can be simultaneously easy to perform but difficult to detect.

“We see this class of bugs likely to appear in custom applications within (big) organizations (enterprises),” Mulliner told Threatpost in an email interview. “GEM bugs only exist in applications that provide multiple privilege levels within the application, these type of enterprise applications most likely handle sensitive data.”

The key point of this class of bugs, Mulliner continued, is that it requires only minimal skill to exploit them. An attack, he claims, is reduced to manipulating one or multiple user interface widgets, which is easier than reverse engineering an application’s binary, database format, or network protocol.

“Manipulating a widget is easily carried out by using one of many point and click tools that are freely available on the Internet,” Mulliner said. “Therefore, allowing an average user to bypass access controls to gain access to sensitive data and even the ability to modify it.”

The researchers also developed what they are calling a GEM mining tool that automatically detects insecurely configured GUIs. This specific tool works only on Windows systems (though they believe that this type of vulnerability is in no way limited to Windows) and the trio identified a number of previously unknown GEM bugs in Windows-platform software, which they have reported to the appropriate vendors.

In order to be considered for analysis by their GEM miner, an application has to offer at least two levels of privilege – a prerequisite for GEM bugs, since an attacker obviously cannot escalate his or her position if there is only one level of privilege.

Specifically, the researchers used this tool to root out an assortment of different bugs of varying degrees of severity in three separate applications, two of which remain unnamed. The first, an inventory management tool, is vulnerable in such a way that an unauthorized user could create new entries and delete existing ones within the application’s database. Worse yet, the researchers say, the application’s account management window, which is initiated at start-up and merely hidden from view, is totally accessible to an attacker. Without much difficulty, an attacker can modify the visibility attribute of this window (again, using readily available tools) and directly access user credentials. However, if a user were to attempt to log in as the admin, a similar attack could be launched against the admin login window, after which point the attacker would merely need to convince the admin to log in or simply wait for that to happen.

In the second app, used for employee and project management, an attacker could modify the visibility of a certain hidden window, achieving access to an entire employee database consisting of work schedules, business trips, and vacation days. Leveraging a separate callback GEM bug, the attacker would also have the capacity to change this information.

The third app tested in their experiment – named Proffix – is one that provides client management, order processing, and financial accounting functions. In this case, the vendor has acknowledged the bug and admitted it was not aware of GEM issues. The GEM vulnerabilities discovered by the researchers in this application could give an attacker the ability to modify data stored in the application’s database that normal users are not authorized to modify, completely bypassing the application’s intended access control scheme.

Mulliner, Robertson, and Kirda will discuss their findings next month at the IEEE Symposium on Security and Privacy in San Jose, California.

Security the Facebook Way

Wed, 03/26/2014 - 11:03

Protecting the internal network as well as the users of Facebook is an unenviable task. Facebook users constantly are the target of all manner of phishing, malware and other attacks, and the company’s own network is a major prize for attackers, as well. To help better defend those assets, Facebook’s security team has built an internal framework known as ThreatData that sucks up and processes massive amounts of threat information and helps the company respond more quickly to emerging threats.

Many large enterprises build their own custom security tools and analysis systems, but the details of those systems typically never see the light of day. Under the old security saw that publishing any information about your defensive methods is tantamount to giving aid and comfort to the enemy, most companies prefer to remain silent about what they’re doing on this front. But Facebook has on a couple of occasions been quite open about the way it protects its internal networks and Facebook users. Last year at the CanSecWest conference, a pair of Facebook security employees detailed a complex red team exercise that the company ran in preparation for a real-world hack.

And now the company is providing an inside look at the ThreatData framework that Facebook security engineers created to help them stay abreast of the new malware and phishing threats that emerge every day. Facebook’s answer was to build a set of feeds of malicious URLs, malware hashes and other information, store it in a database with a couple of custom search capabilities, and push the data through a custom processing engine to look for new threats that need immediate responses.

“The ThreatData framework is comprised of three high-level parts: feeds, data storage, and real-time response. Feeds collect data from a specific source and are implemented via a light-weight interface. The data can be in nearly any format and is transformed by the feed into a simple schema we call a ThreatDatum. The datum is capable of storing not only the basics of the threat, but also the context in which it was bad. The added context is used in other parts of the framework to make more informed, automatic decisions,” Mark Hammel, a threat researcher at Facebook, wrote in an explanation of the framework.
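As a rough illustration of the idea, a ThreatDatum-style record and a feed adapter might look like the sketch below. Every field and function name here is an assumption for illustration, not Facebook's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatDatum:
    """Hypothetical sketch of a ThreatData-style record: the basics of
    the threat plus the context in which it was observed to be bad."""
    indicator: str        # e.g. a URL or a file hash
    indicator_type: str   # "url", "hash", ...
    source: str           # which feed produced it
    severity: str         # the feed's judgement: "malicious", ...
    context: dict = field(default_factory=dict)   # extra evidence

def from_csv_line(line, source):
    """A feed adapter: transform one source-specific format (here, a
    simple CSV line) into the common schema."""
    indicator, kind, severity = line.strip().split(",")
    return ThreatDatum(indicator, kind, source, severity,
                       context={"raw": line.strip()})

datum = from_csv_line("http://evil.example/x.exe,url,malicious", "demo-feed")
print(datum.indicator_type)   # url
```

The value of the pattern is that each feed only has to implement one small transform; everything downstream works against the shared schema.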

The ThreatData framework consumes feeds from VirusTotal, malicious URL repositories, paid vendor data and other sources; the data is then pushed into the company’s Hive and Scuba data repositories. Hive helps analysts answer long-term queries about whether the system has seen a specific piece of malware before, while Scuba is focused on shorter-term problems, like emerging phishing site clusters. Facebook’s team can then implement various responses, such as sending any new malicious URLs to a blacklist used on Facebook.
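The store-then-respond flow described above can be sketched in a few lines. The `hive` and `scuba` lists below are plain stand-ins for Facebook's actual long-term and short-term stores, and the blacklist rule is an assumed example of a real-time response:

```python
# Minimal sketch of the feed -> storage -> real-time response flow.
hive = []             # long-term store: everything, for retrospective queries
scuba = []            # short-term store: recent data, for fast exploration
url_blacklist = set() # response artifact consumed by the site

def respond(datum):
    # Real-time response: newly seen malicious URLs go straight onto
    # the blacklist; other indicator types would get other handlers.
    if datum["type"] == "url" and datum["verdict"] == "malicious":
        url_blacklist.add(datum["indicator"])

def ingest(datum):
    hive.append(datum)   # keep forever
    scuba.append(datum)  # keep recent
    respond(datum)       # act immediately

ingest({"type": "url", "verdict": "malicious",
        "indicator": "http://evil.example/a"})
ingest({"type": "hash", "verdict": "malicious", "indicator": "d41d8cd9..."})

print(len(hive), len(url_blacklist))   # 2 1
```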

The system has enabled the Facebook security team to identify some ongoing attacks that were affecting the company’s users, as well as other victims. One example is an odd spam campaign last summer that was sending links from fake Facebook accounts that led users to a piece of malware designed to infect some feature phones. The malware had the ability to steal contact list data and send premium-rate SMS spam messages.

“With this discovery, we were able to analyze the malware, disrupt the spam campaign, and work with partners to disrupt the botnet’s infrastructure,” Hammel said.

“We realize that not all aspects of this approach are entirely novel, but we wanted to share what has worked for us to help spark new ideas. We’ve found that the framework lets us easily incorporate fresh types of data and quickly hook into new and existing internal systems, regardless of their technology stack or how they conceptualize threats.”

Image from Flickr photos of Coletivo Mambembe

Full Disclosure List Rises From the Ashes For Fresh Start

Wed, 03/26/2014 - 08:10

When the Full Disclosure mailing list closed down last week, many in the security community wondered what, if anything, would fill the void. As it turns out, Full Disclosure will fill that void.

John Cartwright, one of the creators of the list, announced on March 19 that he was shutting it down after growing tired of requests from a particular user to remove some archived messages. Cartwright said he had endured years of legal threats from vendors and other issues associated with maintaining a list that often included zero day vulnerability information and exploit code, and he had had enough of it.

“I’m not willing to fight this fight any longer.  It’s getting harder to operate an open forum in today’s legal climate, let alone a security-related one.  There is no honour amongst hackers any more. There is no real community.  There is precious little skill.  The entire security game is becoming more and more regulated.  This is all a sign of things to come, and a reflection on the sad state of an industry that should never have become an industry,” Cartwright wrote.

But now, Fyodor, the creator of the Nmap network scanner, has stepped in and started a new version of Full Disclosure that will carry on in the same vein as the original list.  Fyodor, whose real name is Gordon Lyon, said in an announcement of the new list that he had talked with Cartwright about starting a new list, and Cartwright had told him to go ahead if he so desired.

“When I mailed John recently asking how I could help, he said he was through with the list but “if you want to start a replacement, go for it.”  So here we are.  I already deal with (or ignore) many legal threats and removal demands since I’ve long run the most popular Full Disclosure web archive, and I already run mail servers and Mailman software for my other lists (like Nmap dev and Nmap announce).  I love the Full Disclosure philosophy and movement, so I’ve started a new list!” Fyodor wrote in the announcement of the new list.

Users will need to re-subscribe to the new Full Disclosure list, but Fyodor said that he envisions the new list being a successor in spirit to the original one and being a resource for the security community.

“The new list must be run by and for the security community in a vendor-neutral fashion. It will be lightly moderated like the old list, and a volunteer moderation team will be chosen from the active users. As before, this will be a public forum for detailed discussion of vulnerabilities and exploitation techniques, as well as tools, papers, news, and events of interest to the community,” Fyodor wrote. “FD differs from other security lists in its open nature, light (versus restrictive) moderation, and support for researchers’ right to decide how to disclose their own discovered bugs. The full disclosure movement has been credited with forcing vendors to better secure their products and to publicly acknowledge and fix flaws rather than hide them. Vendor legal intimidation and censorship attempts won’t be tolerated!”

Photo by Jacob Appelbaum.

MH 370-Related Phishing Attacks Spotted Against Government Targets

Tue, 03/25/2014 - 16:04

Hold off on the notion that watering hole attacks may supplant phishing as the initial means of compromise in advanced attacks. A number of recent targeted campaigns have used the disappearance of Malaysia Airlines Flight 370 as a lure to infect government officials in the U.S. and Asia-Pacific.

FireEye today published research on a number of spear phishing attacks that contained either infected attachments or links to malicious websites. One Chinese group, admin@338, has been active in the past targeting international financial firms that have expertise in analyzing global economic policies. Two days after flight 370 was reported missing, a spear phishing email was sent to government officials in Asia-Pacific, FireEye said, with an attachment referring to the missing airliner.

Users who clicked on the attachment saw a blank document, while in the background a variant of the Poison Ivy Trojan was installing and eventually established a backdoor to www[.]verizon[.]proxydns[.]com. This group has used both Poison Ivy and this domain in previous attacks, FireEye said.

Poison Ivy has some miles on it, but security researchers say hacker groups, in particular some with ties to China, continue to make use of it. The malware is a remote access Trojan that allows attackers to not only set up backdoor communication with infected machines, but push additional malicious code, steal documents and system information, and pivot internally.

FireEye said it monitored a second attack from the admin@338 group which targeted a “U.S.-based think tank” on March 14. The malicious attachment pretended to be a Flash video related to the missing plane and attached a Flash icon to the executable, researchers said.

This version of Poison Ivy connected to its command and control at dpmc[.]dynssl[.]com:443 and www[.]dpmc[.]dynssl[.]com:80, FireEye said, adding that the phony Verizon domain used in the first attack also resolved to an IP used by this attack as well.

Admin@338 is not the only hacker group using the Malaysia tragedy to its advantage. On March 9, a malicious executable disguised as a PDF connected to a command and control server at net[.]googlereader[.]pw:443. The victim is shown a phony PDF purporting to be a CNN story about the disappearance of the flight.

Three more samples were detected that used a Word document, or an executable, disguised as a .DOC extension, dropping an exploit for CVE-2012-0158 used in the IceFog, NetTraveler and Red October APT campaigns reported by Kaspersky Lab. All of these exploits behaved similarly, targeting high-value victims with backdoor connections.

Basecamp Online After DDoS Attack, Extortion Attempt

Tue, 03/25/2014 - 14:52

The project management console Basecamp is back online and its developers are in the process of restoring customers’ network access Tuesday after the service was taken down by a distributed denial-of-service (DDoS) attack Monday.

The attack started at 8:46 a.m. CST yesterday and flooded the site with 20 gigabits of data per second, taking it and all of its services offline for a few hours, according to David Heinemeier Hansson, a partner at Basecamp.

Hansson, the Danish programmer who also created the Ruby on Rails development framework, described the attack in a Github gist Monday.

“We’re doing everything we can with the help of our network providers to mitigate this attack and halt the interruption of service. We’re also contacting law enforcement to track down the criminals responsible. But in the meantime, it might be a rough ride, and for that we’re deeply sorry,” Hansson wrote at the time.

According to a subsequent note on Signal v. Noise, no data was compromised in the attack, but Hansson lamented that users weren’t able to get to their data when they needed it, calling that unacceptable.

As with any DDoS attack, the attackers flooded Basecamp’s services with requests. The attack shares some similarities with a DDoS that affected the social networking site Meetup just over two weeks ago.

Like the Meetup incident, the attack against Basecamp was launched following a blackmail attempt wherein Basecamp could have paid to mitigate the attack. According to Hansson the blackmail came from someone who “hit others just last week” and came from an email matching the following address: “dari***”

It’s unclear if Hansson is referring to Meetup as the “others” hit last week, as the Meetup attack took place three weeks ago.

The attackers behind Meetup’s DDoS demanded $300 to keep the site online. In a blog post, the site’s CEO and co-founder Scott Heiferman said the company refused to pay so as to avoid setting a nasty precedent, yet it was still hit with a series of attacks that kept the site offline for nearly the entire weekend.

Like Meetup, Basecamp acknowledged it would never give in to blackmail.

“The only thing we’re certain of is that, like Meetup, we will never negotiate with criminals, and we will not succumb to blackmail. That would only set us up as an easy target for future attacks,” Hansson said.

While the attack appears to have stopped for now, Basecamp says “there’s no guarantee” it won’t resume and that it is “remaining on the highest alert for now.” As of the last update, Basecamp developers had restored service for about 95 percent of customers yesterday morning but were still dealing with a variety of lingering network issues.

While the group is still investigating the attack with law enforcement, it plans to post a technical postmortem of the attack within 48 hours, provided it isn’t attacked again.

Largely headquartered in Chicago, Basecamp makes a web-based project management service that lets teams outline to-do lists, share files and message colleagues back and forth.

White House Proposal Would End NSA Metadata Program

Tue, 03/25/2014 - 13:45

Privacy advocates are cautiously applauding the reports that the Obama administration will unveil a legislative proposal to end the National Security Agency’s collection of Americans’ bulk phone records, but are concerned what the fine print on that proposal might hold.

“Given all the various ways that the NSA has overreached, piecemeal change is not enough,” said Electronic Frontier Foundation legal director Cindy Cohn and EFF legislative analyst Mark M. Jaycox.

A report in the New York Times last night explains that the administration wants to end the intelligence agency’s collection and storage of phone records; those instead would stay with telecommunications providers who would continue to store them for 18 months as they are federally mandated to do. The NSA currently stores those records for five years, longer if the agency is unable to crack an encrypted communication from a suspect.

The NSA would need a judge’s OK under a new proposed court order to be able to access those records going forward, the Times article said. The legislative proposal is a response to a March 28 deadline imposed by the president in January during a speech in which he promised significant NSA reforms to the agency’s collection and use of phone call metadata. The metadata—records of calls made to and from a suspect, and their duration—is used to map connections between foreigners thought to be involved in terrorism. The NSA’s collection of metadata admittedly also ensnares call data from Americans, which is supposed to be outside the scope of the program and obtainable only with a court order.

“This is just one example of where these surveillance programs were approved in secret by all three branches of government, but could not withstand public scrutiny, so much so that the president has come around and decided reform is necessary,” said Brett Max Kaufman, National Security Fellow and attorney with the American Civil Liberties Union. “Today he apparently abandoned that need to engage in bulk collection; that shows that secret policies will not hold up to public scrutiny.”

On Friday, the current authorization from the secret Foreign Intelligence Surveillance Court which authorizes the collection program is set to expire. Under the proposal, the president is expected to ask the court to renew the program for another 90-day cycle, administration officials told the Times. Once those 90 days are up, reforms are expected to kick in.

The House Intelligence Committee today also drafted a bill that would end the NSA program and keep phone records with providers. Similar to the president’s proposal, the House bill would prevent the NSA from collecting records of phone calls and Internet activity, but unlike the White House proposal, it would not require a judge’s prior approval for a phone number before that request is submitted to a provider, the Washington Post said. A judge would have to rule on the request after the FBI submits the request to the provider.

“If there is no judicial authorization beforehand, I don’t see the civil liberties community getting behind it,” Harley Geiger, senior counsel for the Center for Democracy & Technology, told the Washington Post.

The USA FREEDOM Act seeks to ban the collection of any records, including Internet searches, email and other electronic communication. The president’s proposal and the House bill address only phone metadata.

“It’s important to keep in mind that today’s proposal is a step in the right direction,” Kaufman said, “but wouldn’t solve all the problems that need to be addressed.”

The EFF, meanwhile, threw its endorsement to the proposed USA FREEDOM Act from Judiciary chairmen Sen. Patrick Leahy and Rep. Jim Sensenbrenner, saying that a new proposal is unnecessary.

“It’s a giant step forward and better than either approach floated today since it offers more comprehensive reform, although some changes are still needed,” Cohn and Jaycox said.

The new court order under which the NSA would operate, according to unnamed administration officials, would require phone companies to provide records in a compatible format, and it must include data about new calls placed or received after the order is granted, the New York Times article said. The government would also be able to seek records on calls up to two calls, or hops, away from a suspect’s number.
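The "two hops" scope described above is just a breadth-first walk of the call graph, stopping two edges from the seed number. The sketch below illustrates the idea on made-up data; it is not how any agency's actual system works:

```python
from collections import deque

def contacts_within_hops(call_graph, seed, max_hops=2):
    """Return every number reachable from `seed` within `max_hops`
    calls, using breadth-first search over a call graph."""
    seen = {seed: 0}          # number -> hop distance from seed
    queue = deque([seed])
    while queue:
        number = queue.popleft()
        if seen[number] == max_hops:
            continue          # don't expand past the hop limit
        for peer in call_graph.get(number, ()):
            if peer not in seen:
                seen[peer] = seen[number] + 1
                queue.append(peer)
    seen.pop(seed)            # the seed itself isn't a "contact"
    return set(seen)

# Fabricated example: who called whom.
graph = {
    "suspect": {"a", "b"},
    "a": {"c"},
    "c": {"d"},   # d is three hops out: outside the two-hop scope
}
print(sorted(contacts_within_hops(graph, "suspect")))   # ['a', 'b', 'c']
```

Even with the two-hop cap, the reachable set grows multiplicatively with each hop, which is why the scope of such queries matters so much to privacy advocates.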

“The executive branch does not need congressional approval to stop the spying; nothing Congress has done compels it to engage in bulk collection.  It could simply issue a new Executive Order requiring the NSA to stop,” the EFF’s Cohn and Jaycox said.

Many of these reforms were laid out in the president’s January speech, which called for increased Executive branch oversight of the intelligence community’s dragnet surveillance activities. He ordered annual reviews by the Attorney General and Director of National Intelligence that would help declassify Foreign Intelligence Surveillance Court opinions that have broad privacy implications. Obama also called on Congress to establish a panel of privacy experts outside of government to render opinions on significant cases before the FISC hears them.

The many reforms the president proposed came from a review board’s recommendations in December. The board recommended to the president that metadata be left with the telecommunications providers who already store it for business purposes, or that it be handed over to an independent third party. It also recommended at the time that the NSA director job be Senate-confirmed and a civilian. That was shot down, however, when Obama announced that the NSA director would continue to be the head of U.S. Cyber Command, a military position.

Word Zero Day Attacks Use Complex Chain of Exploits

Tue, 03/25/2014 - 11:05

The exploit that attackers are using to target a zero day vulnerability in Microsoft Word relies on a complex series of pieces, including an ASLR bypass, ROP techniques and shellcode with several layers of tools designed to detect and defeat analysis. Microsoft officials said the exploit is being used in targeted attacks right now and attackers are employing it to drop a backdoor on vulnerable machines.

The vulnerability, which Microsoft acknowledged yesterday in a security advisory, affects several versions of Word and Office, both on Windows and OS X, and is related to a problem in the handling of RTF files. Microsoft also acknowledged that there is a theoretical method through which an attacker could trigger the vulnerability in Outlook, but that method hasn’t been seen in the wild yet.

The targeted attacks that have been identified thus far are going after Word 2010, and Microsoft officials said that the exploit doesn’t work against Word 2013, which has ASLR enforcement enabled; there, the exploit simply crashes the application. But on vulnerable machines, the exploit works well.

“The attack detected in the wild is limited and very targeted in nature. The malicious document is designed to trigger a memory corruption vulnerability in the RTF parsing code. The attacker embedded a secondary component in order to bypass ASLR, and leveraged return-oriented-programming techniques using native RTF encoding schemes to craft ROP gadgets,” Chengyun Chu and Elia Florio of the MSRC engineering team wrote in a blog post analyzing the exploit.

“When the memory corruption vulnerability is triggered, the exploit gains initial code execution and in order to bypass DEP and ASLR, it tries to execute the ROP chain that allocates a large chunk of executable memory and transfers the control to the first piece of the shellcode (egghunter). This code then searches for the main shellcode placed at the end of the RTF document to execute it.”
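An egghunter is a tiny first-stage payload that scans memory for a known tag (the "egg") and jumps to whatever follows it, so a large second stage can live anywhere in the document. The sketch below models that search in Python over a byte buffer; the tag value and layout are invented for illustration and are not taken from this exploit:

```python
# Toy model of the egghunter stage: scan the in-memory document for a
# doubled marker and return where the main payload begins.

EGG = b"\x90\x50\x90\x50"   # hypothetical 4-byte marker

def egghunt(memory):
    """Return the offset just past the doubled egg, i.e. where the main
    shellcode starts, or None if the egg is absent."""
    offset = memory.find(EGG + EGG)   # doubling reduces false positives
    if offset == -1:
        return None
    return offset + 2 * len(EGG)

# Fake RTF document: junk bytes, then the egg, then the "main shellcode"
# placed at the very end, as in the attack described above.
document = b"A" * 100 + EGG + EGG + b"MAIN-SHELLCODE"
start = egghunt(document)
print(document[start:])   # b'MAIN-SHELLCODE'
```

A real egghunter does this scan in a few bytes of machine code, walking memory pages carefully to avoid faulting, but the search logic is the same.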

The shellcode itself has a number of components designed to detect whether it’s being run in an environment where it’s being analyzed. Many malware authors have employed this technique for several years. The shellcode used in the Word attack has several layers of encryption and also checks for debugging flags and indicators that the code is running in a sandbox. The shellcode also has a feature that looks at the patch level of the compromised machine to determine when the last update was installed.

“The shellcode will not perform any additional malicious action if there are updates installed after April 8, 2014. This means that even after a successful exploitation with reliable code execution, after this date the shellcode may decide to not drop the secondary backdoor payload and simply abort the execution. When the activation logic detects the correct condition to trigger, the exploit drops in the temporary folder a backdoor file named ‘svchost.exe’ and runs it. The dropped backdoor is a generic malware written in Visual Basic 6 which communicates over HTTPS and relies on execution of multiple windows scripts via WScript.Shell and it can install/run additional MSI components,” the Microsoft researchers said.
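The activation logic Microsoft describes amounts to a date-based kill switch: the payload inspects the patch state of the machine and aborts if the newest update postdates a hard-coded cutoff. A simplified model, with the exact comparison being an assumption on our part:

```python
from datetime import date

# Cutoff taken from Microsoft's analysis quoted above.
KILL_DATE = date(2014, 4, 8)

def should_deploy_backdoor(newest_update_installed):
    """Model of the shellcode's activation check: proceed only if the
    machine's newest installed update is on or before the kill date."""
    return newest_update_installed <= KILL_DATE

print(should_deploy_backdoor(date(2014, 3, 1)))   # True  (deploy)
print(should_deploy_backdoor(date(2014, 5, 1)))   # False (abort)
```

Kill switches like this limit the window in which the sample misbehaves, which complicates analysis performed after the cutoff date.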

Microsoft has released a list of indicators of compromise by the backdoor this attack is dropping, one of which is that the malware communicates over SSL with a command and control server that presents a self-signed certificate.
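That particular indicator is easy to check for: a self-signed certificate is one whose issuer is identical to its subject. A minimal sketch, using the dictionary shape that Python's `ssl.SSLSocket.getpeercert()` returns for subject and issuer fields (the certificate data below is fabricated):

```python
def looks_self_signed(cert):
    """Heuristic IoC check: a certificate whose issuer equals its
    subject was signed by itself rather than by a CA."""
    return cert.get("issuer") == cert.get("subject")

# Fabricated C2-style certificate: issuer == subject.
c2_cert = {
    "subject": ((("commonName", "c2.example"),),),
    "issuer":  ((("commonName", "c2.example"),),),
}

# Fabricated CA-issued certificate: issuer differs from subject.
ca_cert = {
    "subject": ((("commonName", "www.example.com"),),),
    "issuer":  ((("commonName", "Example Trust CA"),),),
}

print(looks_self_signed(c2_cert))   # True
print(looks_self_signed(ca_cert))   # False
```

On its own, a self-signed certificate is weak evidence (test servers use them too), which is why Microsoft lists it alongside other indicators rather than as proof of compromise.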

Image from Flickr photos of Al Shep.

Hootsuite Back Online Following Denial of Service Attack

Mon, 03/24/2014 - 15:22

Social media management system Hootsuite recovered rapidly from a denial of service (DoS) attack late last week, bouncing back after being offline for a few hours Thursday morning.

During that time, starting around 9:45 a.m. EST, users were unable to access the service after a malicious actor flooded its servers, knocking its dashboard and mobile APIs offline.

The company’s CEO, Ryan Holmes, insisted in a short blog post that no customer data was compromised in the attack, and said the company was quick to respond and is working on a solution to prevent future attacks.

“HootSuite Engineering and Security teams were able to respond immediately, and are working with hosting providers to mitigate the impact of any future attacks.”

Similar to Tweetdeck, Hootsuite is a web-based dashboard client used by social media professionals to manage Twitter.

Several thousand of the service’s users had their accounts briefly hacked last fall to send out a barrage of pharmaceutical phishing spam. While Hootsuite wasn’t compromised per se, the company did blame the spamming on weak passwords and at the time acknowledged that a “small number of successful attempts to log in to HootSuite were made using user IDs and passwords that were acquired elsewhere.”

The service implemented a handful of security measures last summer to prevent further password theft, including Social Verification, which confirms identity using the Twitter or Facebook log-in credentials associated with a HootSuite account, and Location Verification, which flags log-ins from unusual locations.
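A location check of this kind reduces to comparing the log-in's origin against locations previously seen for the account. The toy rule and names below are our assumptions, not HootSuite's implementation:

```python
def location_check(known_locations, login_location):
    """Toy Location Verification step: allow a log-in from a location
    already associated with the account, otherwise require an extra
    verification challenge."""
    if login_location in known_locations:
        return "allow"
    return "challenge"   # e.g. prompt for additional verification

history = {"CA", "US"}          # locations previously seen
print(location_check(history, "US"))   # allow
print(location_check(history, "BR"))   # challenge
```

Real systems weigh more signals (device, IP reputation, time of day), but the basic allow-or-challenge decision is the same.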

Twitter mandated earlier this year that companies such as Hootsuite using the service’s application programming interface (API) only accept traffic traveling via Transport Layer Security (TLS) or Secure Sockets Layer (SSL). The move was largely done to harden user security for those who use third-party apps by encrypting sensitive information via HTTPS.
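From a client's point of view, complying with such a mandate means refusing to send API traffic to any plaintext endpoint. A minimal client-side sketch (real enforcement happens on Twitter's servers, and the endpoint URL is only an example):

```python
from urllib.parse import urlparse

def require_tls(endpoint):
    """Reject plaintext API endpoints before any request is made, so
    credentials and tokens are only ever sent over TLS/SSL."""
    if urlparse(endpoint).scheme != "https":
        raise ValueError("API traffic must use TLS/SSL (https)")
    return endpoint

print(require_tls("https://api.twitter.com/1.1/statuses/home_timeline.json"))
```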

Targeted Attacks Exploit Microsoft Word Zero Day

Mon, 03/24/2014 - 15:20

Targeted attacks have been spotted against a zero-day vulnerability in Microsoft Word 2010, leading Microsoft to issue a special security advisory and produce a Fix-it solution for users until a patch is ready.

Microsoft also said that its Enhanced Mitigation Experience Toolkit (EMET) is a temporary mitigation for the zero-day. Some versions of EMET would have to be configured to work with Microsoft Office in order to ward off exploits; EMET 4.1 is already configured for Office, for example.

While attacks are currently targeting Microsoft Word 2010, Microsoft said the vulnerability also affects Word 2003, 2007, 2013 and 2013 RT, as well as Office for Mac, Office Web Apps 2010 and 2013, and Word Viewer.

An attacker could exploit the vulnerability with a malicious Rich Text Format file or email in Outlook configured to use Microsoft Word as the email viewer, said Dustin Childs, a Trustworthy Computing group manager at Microsoft.

The vulnerability can also be exploited over the Web where an attacker could host a website containing a malicious RTF exploit, or upload a malicious RTF exploit onto a site that accepts user-provided content. Victims would have to be enticed into opening the content; an exploit cannot be triggered without user interaction.

The Fix it disables opening of RTF content in Word, Microsoft said.

“The issue is caused when Microsoft Word parses specially crafted RTF-formatted data causing system memory to become corrupted in such a way that an attacker could execute arbitrary code,” Microsoft said in its advisory, adding that Word is by default the email reader in Outlook 2007, 2010 and 2013.

Microsoft said it could release an out-of-band patch, but it is more likely to wait until its next Patch Tuesday security updates are released on April 8, a date that, as Microsoft announced some time ago, also marks the end of support for Windows XP.

Microsoft has made it a common practice to release Fix it mitigations or recommend the use of EMET as a temporary stopgap while zero-day vulnerabilities are being actively exploited in the wild. The last one issued was in February for a string of attacks against a zero day in Internet Explorer.

The vulnerability in IE 10 was exploited by two different hacker groups against government and aerospace targets in the U.S. and France respectively. The same use-after-free vulnerability was present in IE 9 but was not being exploited.

EMET has also been a popular mitigation recommendation from Microsoft against memory-based vulnerabilities. The toolkit contains a dozen mitigations that fend off buffer overflows and other memory corruption attacks that would otherwise let attackers execute code on vulnerable machines.

Most recently, Microsoft released a technical preview of EMET 5.0 that included two new exploit mitigations. Researchers, however, have been finding moderate success in developing bypasses for some of the protections bundled in with EMET.

Advocates Seek ‘Smart Regulation’ of Surveillance Technology

Mon, 03/24/2014 - 15:18

The long shadow cast by the use of surveillance technology and so-called lawful intercept tools has spread across much of the globe and has sparked a renewed push in some quarters for restrictions on the export of these systems. Politicians and policy analysts, discussing the issue in a panel Monday, said that there is room for sensible regulation without repeating the mistakes of the Crypto Wars of the 1990s.

The last couple of years have seen a major uptick in the use of surveillance technology by governments around the world. Researchers have found government agencies in many countries, including Egypt, Syria, Iran and others, using surveillance technology to identify and track dissidents, journalists and activists.

Some politicians, especially in Europe, and privacy advocates have been calling for some regulation of the sale of such technologies, and on Monday Marietje Schaake, a Dutch member of the European Parliament who has publicly supported such regulations in the past, said she believes there’s a clear need for some sort of control of the export of surveillance technology and that it can be done without restricting commerce.

“There’s virtually no accountability or transparency, while the technologies are getting faster, smaller and cheaper,” she said during a panel discussion put on by the New America Foundation. “We’re often accused of over-regulating everything, so it’s ironic that there’s no regulation here. And the reason is that the member states [of the EU] are major players in this. The incentives to regulate are hampered by the incentives to purchase.

“There has been a lot of skepticism about how to regulate and it’s very difficult to get it right. There are traumas from the Crypto Wars. Many of these companies are modern-day arms dealers. The status quo is unacceptable and criticizing every proposed regulation isn’t moving us forward.”

In the 1990s, there was a pitched battle between the United States government and the technology community over the regulation of encryption software and its classification as a munition, which restricted its export. One of the points of argument by the government was the possibility that intelligence agencies would lose much of their ability to spy on foreign governments and citizens if encryption software was widely deployed on the Internet. A proposal at the time from the government sought to solve this problem through the use of key escrow, which essentially would allow the government to hold a copy of the private key for any user, giving it the ability to decrypt communications if there was a legal need.

The scars from that period of time have taken a long time to heal in the security community, and the resistance to export controls on other types of software can be traced to the Crypto Wars in some cases. But others see it as a road full of potential potholes and fear it could lead to broader regulation of the Internet as a whole. The panelists on Monday said they understood those concerns but saw control of surveillance technology as a way to protect, rather than hinder, online freedoms.

“We look at the Internet as a medium for self-expression and it needs to be protected,” said Arvind Ganesan, director of the Business and Human Rights Division at Human Rights Watch. “Not everyone is going to respect the rule of law or people’s rights when they use these technologies. There are a number of companies out there selling these technologies in an unregulated fashion.”

Schaake said that while some companies make no bones about what their products are designed to do, others have found themselves in situations where a customer was using the technology for something they hadn’t anticipated, leading to an unusual situation.

“We’ve been asked by companies to apply sanctions so they could breach contract without being liable,” she said.

While the discussion of export controls for surveillance technology is a relatively new one, Schaake said she hopes that it will continue to evolve and eventually lead to an international agreement on how such tools should be sold.

“We’re not talking about over-regulating the Internet for the sake of regulating the Internet. What I hope will happen is that gradually a set of open norms will emerge. Right now, we’re not even close to knowing what norms we’re looking at. You have to start somewhere, and of course you will lose some market to someone, but we have real policies that are being compromised by these companies. That problem needs to be addressed. That’s why we’re seeking smart regulation to make that happen.”

Critics Upset as Microsoft Conducts Email Search in Leak Investigation

Mon, 03/24/2014 - 12:55

Late last week it emerged that Microsoft had searched through the contents of a French blogger’s Hotmail account in order to track down the source of a leak of proprietary information from the Redmond, Wash., tech giant.

The Electronic Frontier Foundation and transparency advocates have expressed strong disapproval of the entire situation. The EFF has gone so far as to suggest that Microsoft’s actions constitute a direct violation of the Electronic Communications Privacy Act (ECPA).

The saga began when a Microsoft employee named Alex Kibkalo allegedly stole protected information pertaining to Microsoft’s Activation Server Software Developer’s Kit (SDK) and emailed it – via Hotmail, which is owned and operated by Microsoft – to a French blogger.

Around August 2012, Microsoft became aware that someone had leaked the SDK after the blogger in question – who is not named in the criminal complaint filed against Kibkalo in September 2012 – began posting screenshots of unreleased Windows operating system features. Microsoft’s Trustworthy Computing Investigations (TWCI), the division of the company tasked with protecting it against both external and internal threats, launched an investigation accordingly.

In early September 2012, an unnamed person contacted Steven Sinofsky, former president of Microsoft’s Windows Division. The blogger had contacted this source in order to confirm that the code he had received was in fact proprietary Microsoft code. In an interview with TWCI, the source indicated that the blogger had made contact via Hotmail.

According to the complaint (obtained by The Register), “After confirmation that the data was Microsoft’s proprietary trade secret, on September 7, 2012 Microsoft’s Office of Legal Compliance (OLC) approved content pulls of the blogger’s Hotmail account.”

Upon examining the contents of the blogger’s email account, Microsoft found Kibkalo’s correspondence with the blogger. The company then provided all of this information to the FBI, which arrested Kibkalo and charged him with theft of trade secrets.

Microsoft published a response to the emergence of these facts, noting that it would make certain changes to its policies, but ultimately defending its right to search the contents of its users’ communications without legal oversight.

“Courts do not, however, issue orders authorizing someone to search themselves, since obviously no such order is needed,” wrote John Frank, deputy general counsel and vice president of legal and corporate affairs. “So even when we believe we have probable cause, there’s not an applicable court process for an investigation such as this one relating to the information stored on servers located on our own premises.”

Frank goes on to claim that the company acted within its terms of service by conducting “a limited review of this third party’s Microsoft operated accounts,” which the company only undertakes in “the most exceptional circumstances” after “[applying] a rigorous process before reviewing such content.”

Frank also acknowledged public concern over the company’s actions, and said Microsoft will adhere to the following policies moving forward:

  • Microsoft will not conduct a search of customer email and other services unless the circumstances would justify a court order, if one were available.
  • To ensure compliance with the standards applicable to obtaining a court order, Microsoft will rely in the first instance on a legal team separate from the internal investigating team to assess the evidence. It will move forward only if that team concludes there is evidence of a crime that would be sufficient to justify a court order, if one were applicable. As a new and additional step, the company will then submit this evidence to an outside attorney who is a former federal judge. It will conduct such a search only if this former judge similarly concludes that there is evidence sufficient for a court order.
  • Even when such a search takes place, it is important that it be confined to the matter under investigation and not search for other information. Microsoft says it will continue to ensure that the search itself is conducted in a proper manner, with supervision by counsel for this purpose.
  • Finally, the company believes it is appropriate to ensure transparency of these types of searches, just as it is for searches that are conducted in response to governmental or court orders. The company therefore will publish as part of its bi-annual transparency report the data on the number of these searches that have been conducted and the number of customer accounts that have been affected.

“Unfortunately, this new policy just doubles down on Microsoft’s indefensible and tone-deaf actions in the Kibkalo case,” says EFF legal fellow Andrew Crocker. “It begins with a false premise that courts do not issue orders in these circumstances because Microsoft was searching ‘itself,’ rather than the contents of its user’s email on servers it controlled.”

Had the company believed it had probable cause to search one of its users’ Hotmail accounts, Crocker continues, Microsoft could easily have presented its case to the FBI, which could have sought a proper search warrant.

“To be sure, the process described in Microsoft’s statement bears more than a passing resemblance to a standard criminal investigation, with a prosecutorial team building a case and then presenting it to an ostensibly neutral third party, a retired federal judge no less,” Crocker writes. “Let’s call it Warrants for Windows!”

Crocker acknowledges that while the search may have revealed criminal activity, it was also conducted in Microsoft’s own self-interest, and therefore sets a dangerous precedent.

Time Warner Reports Fewer Than 250 National Security Orders

Mon, 03/24/2014 - 11:51

Time Warner Cable has joined a half-dozen telecommunications and technology companies that, in the past six months, have published their first transparency report on government and law enforcement requests for user data and content.

Since the Edward Snowden leaks began last June, transparency reports have become a provider’s best vehicle for communicating with customers and the industry about how much data they are legally compelled to share with the government and authorities. Companies have also gone to great lengths to increase the degree of transparency they can offer, in order to dispel any perception that a provider might be complicit in intelligence agency surveillance activity.

For example, earlier this year the U.S. Department of Justice relaxed the reporting rules for companies, allowing them more flexibility in disclosing how many National Security Letters, and how many requests for customer data and content under the Foreign Intelligence Surveillance Act (FISA), they receive.

Time Warner’s transparency report, released on Friday, revealed that the giant cable and Internet provider received between 0 and 249 national security-related orders, affecting the same range of customer accounts, between January 2013 and June 2013. Organizations are not permitted to release the exact number of National Security Letters and FISA orders they receive, but may report them in ranges of 250. Prior to January, companies could report only in buckets of 0-999, leading companies such as Google, Facebook, Twitter, Yahoo and Microsoft to sue the government seeking more transparency. The January decision came in exchange for the companies’ withdrawal of their respective legal actions.
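
The banded reporting rules reduce to simple arithmetic. A minimal sketch of how an exact order count maps to a disclosure band (the function name is ours, for illustration only):

```python
def reportable_range(count, band=250):
    """Map an exact order count to the disclosure band it falls in.

    Under the January 2014 rules, providers may report national
    security orders only in bands of 250 (0-249, 250-499, ...),
    never as exact counts. Earlier rules used bands of 1,000.
    """
    low = (count // band) * band
    return (low, low + band - 1)

# Any exact count from 0 through 249 is reported as the same band.
assert reportable_range(0) == (0, 249)
assert reportable_range(137) == (0, 249)
assert reportable_range(250) == (250, 499)
```

This is why a "0 to 249" disclosure is consistent with a provider having received no orders at all.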

“We believe helping our customers understand how often information about our customers is being requested is important,” Time Warner said in a statement Friday.

The company’s transparency report covered the entirety of 2013; Time Warner received roughly 12,000 requests last year, affecting close to 16,000 customer accounts. The vast majority of requests came via subpoenas (82 percent). Court orders made up 12 percent of requests, and search warrants 4 percent. The remainder were emergency requests, pen register/trap and trace orders, and wiretap orders.

Time Warner reported that it disclosed user content in 3 percent of requests. Subscriber information, also known as non-content requests, was provided in 77 percent of requests while no data was disclosed in 20 percent of requests. Non-content requests are limited to a customer’s name, address, phone number and IP address, Time Warner said. The company said it provides “meaningful notice” to customers if their information is requested by the government unless explicitly told not to.

“Time Warner Cable carefully reviews each order to ensure that it is a lawful request,” the company said. “If there is any question about the validity or scope of a request, we challenge it in the same manner we challenge all demands for customer information.”

WhiteHat Releases Aviator Browser for Windows

Mon, 03/24/2014 - 10:37

The privacy and anonymity of users’ online communications has been at the forefront of many discussions in the tech community and the general public in the last year as more and more information has leaked out about the NSA’s methods and how the agency collects vast amounts of user data. Keeping Web sessions private and secure can be a daunting task, especially for users who may not be so familiar with how to lock down their browsers, but WhiteHat Security is trying to make that process simpler with the release of a beta version of its Aviator browser for Windows.

Aviator is built on the Chromium code base, like Google Chrome, and is designed with security, privacy and anonymity in mind from the beginning. The browser, by default, doesn’t allow any tracking of users’ movements on the Web and WhiteHat doesn’t have any partnerships with advertisers or tracking companies. It also has DuckDuckGo set as the default search engine, a major change from most other browsers, which typically have Google or Bing as the default. DuckDuckGo doesn’t save any search history data from users or perform any tracking.

The disconnection from ad networks is a big part of the security and privacy model for Aviator. The browser doesn’t simply block ads, the way that many browser extensions do. Instead, the browser doesn’t make any connections to ad networks at all, which stops a large part of the tracking done on Web pages and also prevents potentially malicious ads from running. This difference also makes the browser faster than many of the other major browsers.

“We’re going to do some tests to see exactly what the difference is, but it doesn’t make all of those outbound connection requests so you can tell how much faster it is when you use it,” said Robert Hansen, director of product management at WhiteHat.

WhiteHat released a Mac OS X version of Aviator in October, and it has since been downloaded tens of thousands of times, company officials said. But users immediately began asking for a Windows version, along with versions for Android and other platforms. Aviator was developed as an internal project at WhiteHat for employee use, and eventually the company made the decision to release it to the general public. Because the browser doesn’t include ads or partnerships with ad companies, WhiteHat is considering different revenue models for the browser.

“Therefore, some of our efforts will also be directed towards determining how to sell this in a way that does not involve profiting from our users’ information as many other browsers are in the unfortunate business of doing. As the saying goes, ‘if you aren’t paying for it, you’re the product’,” Hansen wrote in a blog post.

“That said, we want to make sure that all of our existing users of WhiteHat Aviator know that they will continue to get the browser for free, forever.”

Image from Flickr photos of Missyleone.

Attackers Picking Off Websites Running 7-Year-Old Unsupported Versions of Linux

Fri, 03/21/2014 - 15:19

The risks presented by unsupported operating systems have been laid bare by a large-scale attack on hundreds of websites.

Hackers have hit web servers running a version of the Linux 2.6 kernel released seven years ago. The result is a multistage attack where compromised websites are spiked with JavaScript that redirects users to a second site where additional malware is served.

“It is possible that attackers have identified a vulnerability on the platform and have been able to take advantage of the fact that these are older systems that may not be continuously patched by administrators,” said Martin Lee, a researcher with Cisco, who wrote about the compromises.

The second malicious site in this attack, Lee said, is serving up a click fraud scam where the victim’s browser displays a number of ads. He also suspects the attackers are loading a Trojan on compromised machines at this point as well.

The attack ramped up Monday and Tuesday of this week, Cisco said, noting that 400 distinct hosts were infected on each day and more than 2,700 URLs have been used in the attack, some of them legitimate websites that have fallen victim. Most of the web servers hit in this campaign were in the United States, Germany and Spain.

“This large scale compromise of an aging operating system highlights the risks posed by leaving such systems in operation. Systems that are unmaintained or unsupported are no longer patched with security updates,” Lee said. “When attackers discover a vulnerability in the system, they can exploit it at their whim without fear that it will be remedied.”

Lee also points out similarities between this attack and techniques used by the defunct Blackhole exploit kit, but said it’s unlikely these are Blackhole compromises. Instead, they could be part of a mesh network attack described by Sucuri in January.

Coincidentally, Cisco’s report comes a few days after research published by Imperva about exploits surfacing a few months ago for a two-year-old PHP vulnerability. Close to 20 percent of sites on the web are vulnerable to the bug, which affects PHP versions 5.3.x before 5.3.12 and 5.4.x before 5.4.2.

“Not only are we seeing a vulnerability used after it was released so long ago, but what we’re seeing is attackers and professional hackers understanding what vendors understand—people just don’t patch,” Imperva director of security research Barry Shteiman said. “They can’t or won’t or are not minded to fix these problems.”

PHP is found on nearly 82 percent of websites today; these attacks target sites where PHP is running with CGI as an option, creating a condition that allows for code execution from the outside. Shteiman said the vulnerability affects a built-in mechanism in PHP that protects it from exposing files and commands. A configuration flaw lets attackers first disable that security mechanism, which in turn allows them to inject and run arbitrary code remotely.
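
The flaw described here matches the known php-cgi argument-injection bug (CVE-2012-1823): the CGI specification treats a query string containing no unencoded “=” as a search string and splits it into command-line arguments, and pre-patch php-cgi forwarded those arguments to its own option parser. A minimal Python sketch of that vulnerable dispatch logic (the function is ours, an illustration, not actual PHP code):

```python
import urllib.parse

def cgi_args_from_query(query_string):
    """Mimic the pre-patch php-cgi behavior behind CVE-2012-1823.

    Per the CGI spec (RFC 3875), a query string with no unencoded '='
    is treated as a command line: split on '+' and URL-decoded into
    arguments. php-cgi passed these to its own option parser, so an
    attacker could supply flags such as -s (dump script source) or
    -d (override php.ini directives) straight from the URL.
    """
    if "=" in query_string:
        return []  # ordinary form data: nothing passed as arguments
    return [urllib.parse.unquote(tok) for tok in query_string.split("+")]

# A request for /index.php?-s hands php-cgi the -s flag, which dumps
# the script's source code instead of executing it.
assert cgi_args_from_query("-s") == ["-s"]

# Encoding '=' as %3d hides it from the check above, letting attackers
# override php.ini settings and ultimately execute a posted payload.
assert cgi_args_from_query("-d+allow_url_include%3dOn") == ["-d", "allow_url_include=On"]

# A normal form submission contains a literal '=' and passes nothing.
assert cgi_args_from_query("page=home") == []
```

The patched php-cgi simply refuses to honor such query-string arguments, which is why the fix was a small one despite the severity of the bug.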

These two attack campaigns should put system administrators on notice about inventorying unsupported operating systems and bringing patch levels up to par.

“Large numbers of vulnerable unpatched systems on the internet are tempting targets for attackers. Such systems can be used as disposable one-shot platforms for launching attacks,” Cisco’s Lee said. “This makes it all the more important that aging systems are properly maintained and protected.”