UPDATE — Some versions of Philips’ internet-enabled SmartTVs are vulnerable to cookie theft and a mélange of other tricks that abuse a lax WiFi setting.
The problem lies in Miracast, a WiFi feature that comes enabled by default, with a fixed password, no PIN, and no request of permission, according to researchers at the Malta-based firm ReVuln.
The vulnerability allows anyone within range of the device’s WiFi adapter to connect to the TV and access its many features. This includes being able to access potentially sensitive information within the TV’s system and configuration files as well as any files that may be on a USB stick connected to the TV. If the user browses the Internet on the same TV, an attacker could also glean some of the cookies used to access certain websites.
The WiFi hole could also open the TV up to a whole mess of hijinks: An attacker could broadcast their own video, audio or images to the TV, and change the channel on a whim, without the viewer being any the wiser.
A video posted by ReVuln’s Luigi Auriemma on Wednesday points out that the default settings are present in the TV’s most recent firmware update, QF2EU-0.173.46.0, which allows anyone to connect to the device’s WiFi without authorization and without asking permission. The device’s hardcoded password is just ‘Miracast,’ and after users are connected they are not given the option to set a custom password.
In the proof of concept video Auriemma goes on to steal files from a USB device that’s plugged into the TV, along with Gmail cookie files stored in the web browser.
According to ReVuln the vulnerabilities exist in all 2013 models of SmartTV (6, 7, 8, 9xxx) that have the most recent firmware installed.
The WiFi Alliance, a consortium in charge of overseeing all things WiFi, said later Friday that it was looking into the vulnerability and has been in touch with Philips regarding the security of Miracast.
“The recent report of a non-compliant passphrase implementation appears to be limited to a single vendor’s implementation,” a statement from the Alliance read Friday. “We enforce the requirements of our certification programs and have been in contact with the company in question to ensure that any device bearing the Miracast mark meets our requirements.”
The vulnerability is the latest in a line of “internet of things” insecurities, software flaws that plague internet-connected everyday items such as vehicles, light bulbs and medical devices.
The researchers at ReVuln found a flaw similar to the SmartTV bug in Samsung’s LED 3D TV last year, wherein an attacker could exploit a vulnerability to retrieve personal information from the device, spy on users and root the TV remotely.
The makers of two major mobile apps, Fandango and Credit Karma, have settled with the Federal Trade Commission after the commission charged that they deliberately misrepresented the security of their apps and failed to validate SSL certificates. The apps promised users that their data was being sent over secure SSL connections, but the apps had disabled the validation process.
The settlements with the FTC don’t include any monetary penalties, but both companies have been ordered to submit to independent security audits every other year for the next 20 years and to put together comprehensive security programs.
“Consumers are increasingly using mobile apps for sensitive transactions. Yet research suggests that many companies, like Fandango and Credit Karma, have failed to properly implement SSL encryption,” said FTC Chairwoman Edith Ramirez. “Our cases against Fandango and Credit Karma should remind app developers of the need to make data security central to how they design their apps.”
The FTC complaint against Fandango alleges that the Fandango Movies app on iOS, which enables users to buy movie tickets, included an assertion during checkout telling users that their sensitive information was being sent over a secure connection. However, the app didn’t validate those connections, so users’ financial information was exposed during transmission.
“Before March 2013, Fandango did not test the Fandango Movies application to ensure that the application was validating SSL certificates and securely transmitting consumers’ sensitive personal information. Although Fandango commissioned limited security audits of its applications starting in 2011, more than two years after the release of its iOS application, respondent limited the scope of these security audits to issues presented when the ‘code is decompiled or disassembled,’ i.e., threats arising only from attackers who had physical access to a device. As a result, these audits did not assess whether the iOS application’s transmission of information, including credit card information, was secure,” the FTC complaint says.
The FTC also said that Fandango didn’t have a good process for responding to vulnerability reports from security researchers, leading to the company missing an advisory from a researcher who had discovered the SSL vulnerability.
“In December 2012, a security researcher informed respondent through its Customer Service web form that its iOS application was vulnerable to man-in-the-middle attacks because it did not validate SSL certificates. Because the security researcher’s message included the term “password,” Fandango’s Customer Service system flagged the message as a password reset request and replied with an automated message providing the researcher with instructions on how to reset passwords. Fandango’s Customer Service system then marked the security researcher’s message as “resolved,” and did not escalate it for further review,” the complaint says.
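The triage failure the complaint describes is easy to model. The sketch below is a toy version of keyword-based ticket routing; the real Fandango system is not public, so the function name and logic here are illustrative assumptions, not its actual implementation:

```python
def triage_support_message(message: str) -> str:
    """Toy keyword-based ticket router of the kind the FTC complaint
    describes (illustrative only; not Fandango's actual system)."""
    if "password" in message.lower():
        # Anything mentioning "password" is auto-resolved as a reset
        # request -- which is exactly how a vulnerability report that
        # happens to mention stolen passwords gets swallowed.
        return "auto-reply: password reset instructions (ticket resolved)"
    return "escalated for human review"
```

A report reading “your app doesn’t validate SSL certificates, so attackers can steal passwords” matches the keyword and never reaches a human, mirroring the incident in the complaint.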
The problems with the Credit Karma app were similar, as it did not validate SSL certificates during supposedly secure connection attempts. The FTC alleges in its complaint that the company failed to validate SSL certificates on both its iOS and Android apps.
“During the iOS application’s development, Credit Karma had authorized its service provider, the application development firm, to use code that disabled SSL certificate validation ‘in testing only,’ but failed to ensure this code’s removal from the production version of the application. As a result, the iOS application shipped to consumers with the SSL certificate validation vulnerability. Credit Karma could have identified and prevented this vulnerability by performing an adequate security review prior to the iOS application’s launch,” the complaint says.
“In February 2013, one month after addressing the vulnerability in its iOS application, Credit Karma launched the Android version of its application, again without first performing an adequate security review or at least testing the application for previously identified vulnerabilities. As a result, like the iOS application before it, the Android application failed to validate SSL certificates, overriding the defaults provided by the Android APIs.”
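The “in testing only” toggle that shipped to production is a well-known failure mode. A minimal sketch in Python’s standard `ssl` module shows what such a switch looks like and why it matters; the function is hypothetical, but the two context configurations are real:

```python
import ssl

def make_tls_context(testing: bool = False) -> ssl.SSLContext:
    """Build a client TLS context. The `testing` flag mirrors the kind of
    development convenience the FTC complaint describes: if it ships to
    production, any certificate -- including an attacker's -- is accepted,
    enabling man-in-the-middle interception."""
    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname checking
    if testing:
        # Disables both hostname and certificate-chain validation.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

The fix in both FTC cases amounts to ensuring the `testing=True` path can never reach the released app.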
The FTC’s complaint against Credit Karma also alleges that the app was storing users’ authentication tokens and passcodes in the clear on users’ devices.
Image from Flickr photos of Erik Drost.
Cisco this week patched a handful of denial-of-service vulnerabilities in its IOS software. The security updates are part of a biannual release from Cisco; the next one is due in September.
Five of the six patches handle denial-of-service vulnerabilities in its flagship IOS used in most of its routers and network switches. The sixth patch also repairs a DoS bug, but in its Cisco 7600 Series Route Switch Processor 720 with 10 Gb Ethernet uplinks.
Successful exploits of these bugs could not only crash the networking gear, but also force reboots, Cisco said.
Perhaps the most severe vulnerabilities addressed by Cisco are in IOS’ implementation of network address translation (NAT). The update patched two vulnerabilities that an attacker could use to remotely crash networking gear running IOS. Cisco said the vulnerability is in the Application Layer Gateway module in IOS.
“The vulnerability is due to the way certain malformed DNS packets are processed on an affected device when those packets undergo Network Address Translation (NAT). An attacker could exploit this vulnerability by sending malformed DNS packets to be processed and translated by an affected device,” Cisco said in its advisory. “An exploit could allow the attacker to cause a reload of the affected device that would lead to a DoS condition.”
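A NAT Application Layer Gateway must parse DNS payloads in order to rewrite embedded addresses, which is why malformed packets reach this code path at all. Cisco has not published the flaw’s internals, so the following is only an illustrative sketch of the kind of sanity check such a parser needs:

```python
import struct

def parse_dns_header(packet: bytes):
    """Parse the fixed 12-byte DNS header (id, flags, then four
    big-endian record counts) and reject counts that cannot possibly
    fit in the packet. Illustrative only; not the IOS ALG code."""
    if len(packet) < 12:
        raise ValueError("truncated DNS header")
    _id, _flags, qd, an, ns, ar = struct.unpack(">6H", packet[:12])
    # Each record occupies at least one byte, so the counts are bounded
    # by the remaining payload; a forged header can claim far more.
    if qd + an + ns + ar > len(packet) - 12:
        raise ValueError("record counts exceed packet size")
    return qd, an, ns, ar
```

A parser that trusts the claimed counts and walks records past the end of the buffer is the classic way a malformed packet becomes a crash.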
The second NAT vulnerability is in the TCP Input module that could allow a remote attacker to cause a memory leak or reboot of the flawed device.
“The vulnerability is due to the way certain sequences of TCP packets are processed on an affected device when those packets undergo Network Address Translation (NAT). An attacker could exploit this vulnerability by sending a specific sequence of TCP packets to be processed by an affected device,” Cisco said. “An exploit could allow the attacker to cause a memory leak or reload of the affected device that would lead to a DoS condition.”
Cisco also patched a DoS bug in the IOS SSL VPN subsystem, which fails to properly process certain HTTP requests. An attacker could send the VPN malicious requests that consume memory, eventually causing it to crash.
“A three-way TCP handshake must be completed for each malicious connection to an affected device; however, authentication is not required,” Cisco said. “The default TCP port number for SSLVPN is 443.”
Cisco also updated the IPv6 protocol stack in IOS and IOS XE to address a vulnerability that could lead to memory consumption. An attacker would need to send a malformed IPv6 request to exploit the bug.
“The vulnerability is due to incorrect processing of crafted IPv6 packets. An attacker could exploit this vulnerability by sending specially crafted IPv6 packets to the affected device,” Cisco said. “An exploit could allow the attacker to trigger I/O memory depletion, causing device instability and could cause a device to reload.”
IOS and IOS XE were also vulnerable to an exploit of a DoS bug in their Internet Key Exchange version 2 module. IOS devices improperly process malformed IKEv2 packets, so an attacker could exploit the bug by sending such packets to a device, causing it to crash.
The final IOS vulnerability was found in the Session Initiation Protocol implementation of the operating system. A remote attacker could cause IOS to reboot by sending a malicious SIP message if the device is configured to process SIP messages.
“The vulnerability is due to incorrect processing of specific SIP messages. An attacker could exploit this vulnerability by sending specific SIP messages, which may be considered well-formed or crafted to the SIP gateway,” Cisco said. “An exploit could allow the attacker to trigger a device reload.”
Finally, the patch for the Cisco 7600 Series processor vulnerability addresses a security issue with the Kailash field-programmable gate array (FPGA) versions prior to 2.6, Cisco said.
“An attacker could exploit this vulnerability by sending crafted IP packets to or through the affected device,” Cisco said. “An exploit could allow the attacker to cause the route processor to no longer forward traffic or reboot.”
A new email phishing scam is making use of a realistic-looking Apple login page in order to pilfer Apple ID usernames and passwords before moving on to steal user credit card information.
The malicious domain that the attackers are using here is appleidconfirm[dot]net.
It’s not clear whether the attackers have found a way to distinguish legitimate Apple ID email addresses from non-existent ones. However, once the victim has entered seemingly valid credentials, that person is redirected to another part of the malicious domain (ending in /?2).
On this second page, users are presented with a convincing replica of the actual Apple website. The page requests various pieces of personal information, such as full names, dates of birth, billing addresses, and phone numbers. When and if a victim enters that information and clicks the “verify” button, a window then pops up asking for the user’s payment card information.
If the victim decides to enter payment information, he or she will be redirected to the actual Apple website.
According to a technical analysis posted in the SANS ISC Diary write-up, the site responsible for all this tomfoolery was registered just three days ago. The attackers were able to mimic Apple’s interface so accurately because they didn’t copy its HTML or CSS, but instead overlaid their website with screenshots of the real thing. That method becomes a dead giveaway when and if a user attempts to follow any links on the masquerading site, since none of them work. Another dead giveaway, the post notes, is the lack of HTTPS, which Apple would deploy if it were asking users to provide sensitive information.
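The two giveaways noted above can be expressed as simple heuristics. This is purely an illustrative sketch, not a real phishing detector, and the domain in the usage example is hypothetical:

```python
from urllib.parse import urlparse

def phishing_red_flags(page_url: str, link_hrefs: list) -> list:
    """Flag the two tells described above: a credential form served
    without HTTPS, and 'navigation' links that go nowhere because the
    page is really a screenshot overlay. Illustrative only."""
    flags = []
    if urlparse(page_url).scheme != "https":
        flags.append("credential form served without HTTPS")
    if link_hrefs and all(h in ("", "#", page_url) for h in link_hrefs):
        flags.append("all links are dead (screenshot overlay?)")
    return flags
```

Run against a page like `http://fake-apple-login.example/?2` whose links are all `#`, both flags fire; against a genuine HTTPS page with working links, neither does.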
A similar scam emerged last week when attackers compromised a server belonging to EA Games and modified it to look like an Apple log-in page, which they then used in a phishing attack designed to steal Apple ID credentials.
The federal government is looking for a way to relax the laws to make it simpler for law enforcement agents to target and compromise the computers of suspects involved in criminal cases. The Department of Justice has forwarded a request to the body that considers such changes, asking that judges in one district be allowed to issue warrants for remote access operations in that district–or any other.
The change, first reported by the Wall Street Journal, would be a major one, allowing investigators to obtain warrants from a given judge to conduct remote access attacks against suspects’ machines in any other district in the United States. The government’s request also seeks the ability to obtain one warrant that would apply to several computers, as in a large-scale botnet investigation.
“The Department of Justice recommends an amendment to Rule 41 of the Federal Rules of Criminal Procedure to update the provisions relating to the territorial limits for searches of electronic storage media. The amendment would establish a court-supervised framework through which law enforcement can successfully investigate and prosecute sophisticated Internet crimes, by authorizing a court in a district where activities related to a crime have occurred to issue a warrant – to be executed via remote access – for electronic storage media and electronically stored information located within or outside that district,” Mythili Raman, acting assistant attorney general, wrote in a letter supporting the change.
“The proposed amendment would better enable law enforcement to investigate and prosecute botnets and crimes involving Internet anonymizing technologies, both which pose substantial threats to members of the public.”
In a document that lays out the government’s reasoning for the request, which will be considered in two weeks, the government gives a couple of examples of the types of investigations that could benefit from this change. One of the examples is a warrant request in an investigation into a child pornography ring that was hosting a site as a Tor hidden service.
“The second example is based on a warrant used in an investigation of a child pornography website operating as a ‘hidden service’ on the Tor network. Tor masks its users’ actual IP addresses by routing their communications through a distributed network of relay computers run by volunteers around the world. In this case, law enforcement knew the physical location of the server used to host the hidden service. However, without use of a NIT, investigators could not identify the administrators or users of the hidden service. This warrant would authorize the collection of IP addresses, MAC addresses, and other similar information from users and administrators of the website,” Jonathan J. Wroblewski, director of Justice’s Office of Policy and Legislation, wrote in a letter to the chair of the subcommittee considering the rule change.
The letter also includes a sample affidavit in support of a warrant request that describes a “network investigative technique”–the government’s euphemism for hacking–that closely resembles a watering hole attack.
“I make this affidavit in support of an application under Rule 41 of the Federal Rules of Criminal Procedure for a warrant to use a network investigative technique (“NIT”) on computers that access Website A, identified by Tor URL example.onion (collectively, TARGET COMPUTERS), as further described in this affidavit and its attachments, in order to search the TARGET COMPUTERS for the information described in Attachment B,” the sample affidavit says.
The proposed change will be considered by the U.S. Judicial Conference April 7-8.
Schneider Electric, a leading provider of industrial control systems, recently patched a remotely exploitable vulnerability in a driver found in 11 of its products.
The Industrial Control Systems Computer Emergency Response Team (ICS-CERT) released an advisory yesterday alerting users to the availability of a patch and warning of the consequences associated with the stack-based buffer overflow vulnerability found in Schneider’s Serial Modbus Driver, ModbusDrv.exe.
The driver is started when a programmable logic controller is connected to the serial port on a server. It creates a listener on TCP port 27700, and when a connection is made the Modbus Application Header is read into a buffer, the ICS-CERT advisory said.
If the header is too large, a stack-based overflow results. The advisory cautions that a second overflow vulnerability is also exploitable by overwriting the return address. By doing so, an attacker could execute code remotely.
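The missing check is a bounds test on the header’s declared length before buffering the rest of the message. As a hedged sketch (the actual driver code is not public), here is how a Modbus TCP/MBAP header can be parsed with the length validation the advisory implies was absent:

```python
import struct

def read_mbap(header: bytes):
    """Parse the 7-byte Modbus Application Protocol (MBAP) header:
    transaction id (2), protocol id (2), length (2), unit id (1),
    all big-endian. Illustrative sketch, not Schneider's driver code."""
    if len(header) != 7:
        raise ValueError("MBAP header is exactly 7 bytes")
    tid, proto, length, unit = struct.unpack(">HHHB", header)
    if proto != 0:
        raise ValueError("protocol id must be 0 for Modbus")
    # The length field counts the unit id plus the PDU; the Modbus spec
    # caps the PDU at 253 bytes, so anything larger is malformed and
    # must be rejected BEFORE copying data into a fixed-size buffer.
    if not 2 <= length <= 254:
        raise ValueError(f"implausible MBAP length {length}")
    return tid, unit, length - 1  # PDU bytes still to read
```

Copying `length` bytes into a fixed stack buffer without this check is precisely the pattern that produces a stack-based overflow.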
The vulnerable software driver is used across a gamut of industries, including chemicals, manufacturing, energy, nuclear reactors, government facilities, dams and transportation systems, primarily in the United States, Europe and China.
ICS-CERT said it is not aware of any public exploits. The patch is available from Schneider Electric.
ICS-CERT said the following Schneider products contain the vulnerable Modbus driver:
- TwidoSuite Versions 2.31.04 and earlier,
- PowerSuite Versions 2.6 and earlier,
- SoMove Versions 1.7 and earlier,
- SoMachine Versions 2.0, 3.0, 3.1, and 3.0 XS,
- Unity Pro Versions 7.0 and earlier,
- UnityLoader Versions 2.3 and earlier,
- Concept Versions 2.6 SR7 and earlier,
- ModbusCommDTM sl Versions 2.1.2 and earlier,
- PL7 Versions 4.5 SP5 and earlier,
- SFT2841 Versions 14, 13.1 and earlier, and
- OPC Factory Server Versions 3.50 and earlier.
“The affected products are mostly software-based utilities and engineering tools designed for programming and configuring process, machine, and general control applications,” the ICS-CERT advisory said. “These applications rely on a common driver to communicate with PLCs.”
This is the third time this year that ICS-CERT has issued an alert about vulnerabilities in Schneider Electric gear. In January, an advisory was sent out about a remotely exploitable resource consumption vulnerability that was patched in Schneider’s ClearSCADA software. ClearSCADA is secure remote management software designed for use in large, geographically dispersed critical infrastructure systems.
In March, the company patched vulnerabilities in Schneider OPC Factory Server, which is an interface for client applications that require access to production data in real time. The buffer overflow flaws were not remotely exploitable, yet could allow an attacker with local access to run malicious programs on a computer running the vulnerable server software.
The White House today unveiled a five-point plan to end the National Security Agency’s bulk collection of phone call metadata, preserving what it says is a balance between the intelligence community’s national security needs and the public’s desire to maintain its privacy.
The proposal ends the government’s collection of phone records under Section 215 of the PATRIOT Act as it exists today, keeping that data with telecommunications providers who will store those records for 18 months as they are currently federally mandated to do.
The government would have access to the records only under approval from the secret Foreign Intelligence Surveillance Court (FISC), which must approve the querying of a suspect phone number and only after judicial approval based on a national security concern.
Currently, the NSA collects and stores call metadata, and maps connections between numbers belonging to individuals suspected of terrorism or threatening national security. As the Snowden leaks began last June, the depths of NSA surveillance, including dragnet capturing of all Americans’ phone calls without warrants, drew the ire of civil libertarians, mainstream media and politicians on both sides of the aisle.
The new plan was ordered by President Obama during a Jan. 17 address to the nation on surveillance. During that speech, he ordered the Attorney General and the intelligence community to work together on an adequate solution that would alter the collection of data under Section 215. Obama imposed a March 28 deadline for the proposal, the day FISC is expected to renew the NSA program for another 90-day cycle, the final time it will do so.
The White House proposal, hints of which were released two days ago in a New York Times report, also changes the number of hops the government will be able to collect between suspects from three to two. While apparently a concession, ACLU National Security advisor and attorney Brett Max Kaufman told Threatpost this remains a red flag for privacy advocates.
“It’s unclear, if the government is able to satisfy FISC’s standard of a reasonable, articulable suspicion, why anyone connected to that person would also satisfy that same standard to get their call records?” Kaufman said.
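Why the hop count matters becomes obvious when the expansion is written out: each hop is one more layer of breadth-first traversal over the call graph, and the reachable set grows quickly. A toy sketch (the data and function are hypothetical, not any agency’s actual tooling):

```python
from collections import deque

def within_hops(call_graph: dict, seed: str, max_hops: int) -> set:
    """Breadth-first expansion: every number reachable from a
    court-approved seed within `max_hops` call links. Toy model only."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        number, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for contact in call_graph.get(number, ()):
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, depth + 1))
    return seen - {seed}
```

On a simple chain A→B→C→D, two hops from A reaches {B, C} while three hops also sweeps in D; with realistic contact lists of hundreds of numbers per person, each extra hop multiplies the swept-in population, which is the substance of Kaufman’s objection.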
The president’s proposal was a bit more stringent than a similar House Intelligence Committee bill that was introduced on Tuesday, which did not require prior judicial approval; a judge would rule on a request only after the FBI submits it to a provider.
Verizon general counsel Randal Milch said the provider supports the efforts to end bulk collection.
“At this early point in the process, we propose this basic principle that should guide the effort: the reformed collection process should not require companies to store data for longer than, or in formats that differ from, what they already do for business purposes,” Milch said. “If Verizon receives a valid request for business records, we will respond in a timely way, but companies should not be required to create, analyze or retain records for reasons other than business purposes.”
The final two provisions of today’s official proposal say the court-approved numbers can only be used for a limited period of time without again requiring approval from FISC. “The production of records would be ongoing and prospective,” the proposal said.
Also, under court order, the phone companies would be required to provide technical assistance to ensure the records can be accessed in a timely fashion and in an accessible format.
The White House plan would need to be ratified by Congress in order to go into effect, and because of this, the Department of Justice will seek another 90-day renewal from FISC for the program, much to the chagrin of experts.
“EPIC is encouraged by the President’s continued commitment to end the bulk collection program … however, the renewal of the FISC order on Friday would be a disappointing development,” said Alan Butler, appellate advocacy counsel for the Electronic Privacy Information Center (EPIC). “The bulk collection program will not end until the FISC order expires without the President seeking its renewal.”
Researchers are in the midst of rolling out a secure new platform for building web applications that can protect confidential data from being stolen in the event attackers gain full access to servers.
The platform, Mylar, is the result of a project spearheaded by students at the Massachusetts Institute of Technology (M.I.T.) set to be discussed at USENIX’s Symposium on Networked Systems Design and Implementation conference next week in Seattle.
According to the group’s paper, “Building web applications on top of encrypted data using Mylar” (PDF), the platform can encrypt data on servers and decrypt it in users’ browsers, provided they have the correct key.
As it is, there are several ways in which data can be leaked from servers: Attackers could exploit a vulnerability and break in; a prying admin could overstep their bounds; or a server operator could be forced to disclose data by law.
While Mylar’s goal is to keep confidential data safe by preventing these incidents from happening, it does so by operating under the premise that the server where the data is stored has already been hacked.
“Mylar assumes that any part of the server can be compromised, either as a result of software vulnerabilities or because the server operator is untrustworthy, and protects data confidentiality in this setting,” according to the paper.
Raluca Ada Popa, the paper’s lead author and a Ph.D. Candidate at the school’s Department of Electrical Engineering and Computer Science, worked with six colleagues from the school’s Computer Science and Artificial Intelligence Laboratory (CSAIL) for nearly two years on the project.
The report points to recent privacy-minded applications such as Mega and Cryptocat, acknowledging that while those apps allow users to decrypt information from servers in the browser using special keys, they still have their drawbacks.
Or as a description of the platform on M.I.T.’s website puts it, “simply encrypting each user’s data with a user key does not suffice.”
Mainly, it’s because these apps don’t allow data sharing, they make keyword searches difficult and, perhaps most concerning, they can still be tricked into serving malicious code that lets the server extract user keys and data.
To allow data sharing on Mylar, a special mechanism establishes the correctness of keys obtained from the server – backed up by X.509 certificate paths – to ensure that a server that has been compromised cannot trick the app into using a bogus key. This allows multiple users, with keys, to share the same item.
To verify app code, Mylar keeps application code and data separate, checking that any code it runs is properly signed by the website owner; this in turn requires that the HTML pages supplied by the server be static.
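Conceptually, the browser side refuses to run served code unless it matches what the publisher vouched for. Mylar actually checks a signature from the site owner; the stdlib-only sketch below substitutes a pinned digest for a signature, which is a simplification of the same idea, not Mylar’s mechanism:

```python
import hashlib

def verify_app_code(served_code: bytes, pinned_digest: str) -> bool:
    """Toy stand-in for Mylar's code verification: compare the code the
    server delivered against a publisher-pinned SHA-256 digest. A
    compromised server that tampers with the code fails the check."""
    return hashlib.sha256(served_code).hexdigest() == pinned_digest
```

Any modification by a compromised server, such as injecting key-exfiltration code, changes the digest and is rejected before the code ever runs.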
Many schemes require document data to be encrypted under a single key, which rules out easy keyword searches across users. A unique cryptographic scheme in Mylar lets a client search for a keyword across many documents encrypted under multiple keys; the server returns a list of instances of that word without ever learning the word itself or the contents of the documents.
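The flavor of server-side search over encrypted data can be shown with a toy HMAC-token construction. To be clear, this is not Mylar’s scheme: Mylar uses elliptic-curve math so the server can adjust a single client token to match documents under different keys, whereas this sketch has the client derive one token per key it holds:

```python
import hashlib
import hmac

def search_token(doc_key: bytes, word: str) -> bytes:
    # Deterministic, opaque token for (key, word); the server only ever
    # sees these digests, never the plaintext word.
    return hmac.new(doc_key, word.lower().encode(), hashlib.sha256).digest()

def build_index(doc_key: bytes, text: str) -> set:
    # Client side: tokenize the document's words before upload.
    return {search_token(doc_key, w) for w in text.split()}

def server_search(indexes: dict, query_tokens: dict) -> list:
    # Server side: match opaque query tokens against each document's
    # opaque index, learning only which documents matched.
    return [doc for doc, idx in indexes.items() if query_tokens[doc] in idx]
```

Even with full access to `indexes` and `query_tokens`, the server learns which documents match but not what was searched for.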
Mylar owes a lot to this specialized search scheme, which Ada Popa says she devised last May, getting the ball rolling on the platform soon after.
Ada Popa and her team started working on the project in 2012, but it would take another year and a half to truly come to fruition. The researchers initially tried to build the framework over Django and Ruby on Rails before realizing the way the two platforms are designed made them incompatible with what they were looking for from an encryption and confidentiality standpoint.
In the summer of 2013, the group realized that Meteor, an emerging open source web framework, was their best option. Developers from Meteor helped the team test the software, and it wasn’t long before Ada Popa came up with the multikey search scheme, pieced together from elliptic curves, and they were off.
Three months and a few design tweaks later, Mylar was complete.
According to the paper, if adopted, the platform would require little effort by developers. The researchers ported six applications over to Mylar and only needed 36 additional lines of code on average, per app, to protect sensitive data.
The six apps that researchers have tested Mylar on so far consist of a website that lets endometriosis patients record their symptoms, a website for managing homework and grades, a chat application, a forum, a calendar and a photo sharing app.
It might not be long until Mylar catches on with some of those apps in real life.
Two of those apps, the medical app and the website that lets professors at M.I.T. manage homework and grades, actually plan on implementing Mylar in the immediate future.
Endometriosis patients at Newton-Wellesley Hospital, a medical center in Newton, Mass., tested the medical app a month ago. According to Ada Popa, it should be out of alpha deployment in another month or so, following approval from the Institutional Review Board (IRB). Since the app transfers highly sensitive patient information, however, she wouldn’t be surprised if the review period took a bit longer than usual.
Professors in CSAIL’s Computer Systems Security classes have successfully used an app running on Mylar for managing students’ homework and grade information.
Still, while the researchers stress that Mylar isn’t perfect, it does work, provided users exercise a modicum of responsibility when it comes to privacy and security.
While Mylar’s main goal is to protect data in arbitrary server compromises, the platform assumes users are not running the framework on a compromised machine and are not sharing information with untrustworthy users. Mylar also assumes users check that they are using the HTTPS version of the site or app and can recognize phishing attacks.
While it sounds promising for PC usage, the platform could also have a future on Android systems. The researchers claim they’ve tested Mylar on phones running the Google operating system but left the results out of their paper for brevity’s sake.
“Mylar’s techniques for searching over encrypted data and for verifying keys are equally applicable to desktop and mobile phone applications; the primary difference is that code verification becomes simpler, since applications are explicitly installed by the user, instead of being downloaded at application start time,” according to the paper.
The team’s research was aided by a handful of firms including Google, the National Science Foundation, and DARPA’s Clean-Slate Design of Resilient, Adaptive, Secure Hosts (CRASH) program – a program dedicated to crafting cyber-attack resistant systems.
This is the latest piece of software designed by Ada Popa, who considers Mylar the follow up to CryptDB, a piece of software she devised in 2011 that more or less did the same thing that Mylar does, but for databases.
“We started working on this project as a natural next step after the previous project, CryptDB, which did the same for databases,” Ada Popa said, “We realized that web applications are an even more common use case for placing on a cloud or on a compromised server.”
CryptDB encrypted information and ran SQL queries without decrypting the database. Some of Ada Popa’s CryptDB research even found its way into a system Google released later that year, Encrypted BigQuery, which can run SQL-like queries against large, multi-terabyte datasets.
Ada Popa plans to present Mylar in USENIX’s Security and Privacy session next Wednesday and demonstrate the platform later that afternoon alongside one of the paper’s co-authors, Jonas Helfer.
There has been a steady but dramatic increase in the potency of distributed denial of service (DDoS) attacks from the beginning of 2013 through the first two months of this year. In large part, the reason for this rise in volume has to do with the widespread adoption of two attack methods: large SYN flood (synchronization packet flood) attacks and Network Time Protocol (NTP) amplification attacks.
According to an Incapsula report tracking the DDoS threat landscape during this 14-month period, the largest such attacks in February 2013 were delivering traffic at a rate of four gigabits per second (Gbps). By July 2013, 60 Gbps and larger DDoS attacks had become a weekly occurrence. In February 2014, Incapsula reports having witnessed one NTP amplification attack peaking at 180 Gbps. Other reports have found the volume of NTP amplification attacks as high as 400 Gbps.
“As early as February 2013 we were able to track down a single source 4Gbps attacking server, which – if amplified – could alone have generated over 200Gbps in attack traffic,” the report claims. “With such available resources it is easy to explain the uptick in attack volume we saw over the course of the year.”
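The report’s own figures make the multiplication easy to check. Note that the roughly 50x amplification factor below is inferred from the 4 Gbps and 200 Gbps numbers quoted; the report does not state it explicitly:

```python
# Reproduce the report's arithmetic: a single server sending 4 Gbps of
# spoofed requests through an amplifying service that returns ~50x as
# much data as it receives yields a 200 Gbps flood at the victim.
source_gbps = 4             # the single attacking server's outbound rate
amplification_factor = 50   # inferred from the report's "over 200Gbps" claim
attack_gbps = source_gbps * amplification_factor
print(attack_gbps)  # 200
```

This is the economics behind amplification: the attacker pays for 4 Gbps of bandwidth while the victim absorbs 200, because the amplifier (an open NTP or DNS server) does the heavy lifting.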
At present, large scale DDoS attacks, which Incapsula defines as those of 20 Gbps and more, account for nearly one-third of all attacks. Attackers are able to achieve these high volumes by launching large SYN floods and DNS and NTP amplification attacks.
A new entrant to the DDoS landscape is a technique called “hit and run” DDoS attacks. These attacks first emerged in April 2013, and, according to Incapsula, target human-controlled DDoS protections by exploiting weaknesses in services that are supposed to be manually triggered, like generic routing encapsulation tunneling and domain name server re-routing.
Not only is each classification of DDoS attack becoming more potent, but 81 percent of attacks exploit multiple vectors.
“Multivector tactics increase the attacker’s chance of success by targeting several different networking or infrastructure resources,” Incapsula claims. “Combinations of different offensive techniques are also often used to create ‘smokescreen’ effects, where one attack is used to create noise, diverting attention from another attack vector.” Furthermore, multivector attacks can be used for trial and error style reconnaissance as well.
The most commonly deployed attacks are a combination of two types of SYN floods – one deploying regular SYN packets and another using large SYN (above 250 bytes) packets.
“In this scenario, both attacks are executed at the same time, with the regular SYN packets used to exhaust server resources (e.g., CPU) and large SYN packets used to cause network saturation,” they say. “Today SYN combo attacks account for ~75% of all large scale network DDoS events (attacks peaking above 20Gbps). Overall, large SYN attacks are also the single most commonly used attack vector, accounting for 26% of all network DDoS events.”
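A quick back-of-the-envelope calculation, using the report’s own thresholds, shows the packet rates involved (250 bytes is the report’s lower bound for a “large” SYN packet, and 20 Gbps is its threshold for a large-scale attack):

```python
# How many 250-byte "large SYN" packets per second does it take to
# saturate a link at the 20 Gbps large-attack threshold?
packet_bytes = 250          # lower bound for a "large SYN" packet
link_gbps = 20              # Incapsula's threshold for a large-scale attack
pps = (link_gbps * 1e9) / (packet_bytes * 8)  # bits per second / bits per packet
print(f"{pps:,.0f} packets/sec")  # 10,000,000 packets/sec
```

Ten million packets per second is why the combo works: the small-SYN half of the attack exhausts the server’s connection-handling resources at high packet rates, while the large-SYN half consumes raw bandwidth.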
However, in February 2014, NTP amplification attacks surpassed all others as the most commonly seen form of DDoS. This may be the beginning of a new trend or merely a temporary spike, but as the report notes, it is too early to tell.
While the number of requests for user information that Google receives from governments around the world continues to rise–climbing by 120 percent in the last four years–the company is turning over some data in fewer cases as time goes on. Google received more than 27,000 requests for user information from global law enforcement agencies in the last six months of 2013 and provided some user data in 64 percent of those cases.
The new report from Google includes information on requests for user data from governments around the world, as well as new data on National Security Letters sent by the United States government to Google. In the second half of 2013, Google received between 0-999 NSLs, the same range it reported in all of the previous periods, going back to January 2009. However, those letters affected more users or accounts this time, between 1000-1999, up from 0-999 in the first six months of 2013.
The U.S. government only allows companies to report NSLs in ranges of 1,000. The Google transparency report also includes data on orders from the Foreign Intelligence Surveillance Court, but that information is subject to a six-month delay, so there is no data for June through December 2013. In the first six months of last year, Google received 0-999 content requests and the same number of non-content requests.
As usual, the U.S. was the largest contributor to the volume of requests for user data that Google reported, sending 10,574 requests, covering 18,254 accounts. France was second, with 2,750 requests for information about 3,378 accounts. Germany, India, the U.K. and Brazil followed.
“Government requests for user information in criminal cases have increased by about 120 percent since we first began publishing these numbers in 2009. Though our number of users has grown throughout the time period, we’re also seeing more and more governments start to exercise their authority to make requests,” Richard Salgado, Legal Director, Law Enforcement and Information Security at Google, wrote in a blog post on the report.
“We consistently push back against overly broad requests for your personal information, but it’s also important for laws to explicitly protect you from government overreach. That’s why we’re working alongside eight other companies to push for surveillance reform, including more transparency. We’ve all been sharing best practices about how to report the requests we receive, and as a result our Transparency Report now includes governments that made less than 30 requests during a six-month reporting period, in addition to those that made 30+ requests.”
When Google first began reporting the percentage of user data requests that it complies with in some way in 2010, the company reported providing some information in 76 percent of cases. That number has decreased steadily in the years since, down to the 64 percent Google complied with in some way in the second half of 2013.
On its surface, the idea of turning a smartphone into a cryptocurrency mining machine sounds novel. But practical and profitable? Not so much.
That hasn’t stopped thieves from corrupting a number of popular Android applications for just that purpose, including two on the Google Play store called Songs and Prized; Songs has been downloaded a million times.
Several versions of the CoinKrypt malware exist as well, said researchers at mobile security company Lookout. The malicious CoinKrypt apps, Lookout said, have been confined to forums in Spain and France that distribute pirated software.
CoinKrypt is an add-on to a legitimate app and hijacks an Android phone’s resources—which are limited for this purpose to begin with—in order to mine Litecoin, Dogecoin, and Casinocoin.
Desktop computers, for example, have far more resources to dedicate to this purpose than a mobile device, and even they are generally insufficient to mine coins at a profit.
People do mine coins rather than buy them, using purpose-built software to do so. Essentially, miners lend their machines’ processing power to a currency’s network and are rewarded with new coins in return.
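The work being lent is a brute-force proof-of-work search. The sketch below is a generic, simplified illustration using SHA-256 from Python’s standard library – Litecoin and Dogecoin actually use the scrypt hash function, and real network difficulties are astronomically higher than the toy value here:

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose hash falls below the target -- the core loop
    every miner runs. (Real Litecoin/Dogecoin mining uses scrypt, not
    SHA-256; SHA-256 keeps this sketch stdlib-only.)"""
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# A toy difficulty of 16 bits needs ~65,000 hashes on average; real
# networks need trillions, which is why a phone's CPU earns almost nothing.
nonce = mine(b"example header", difficulty_bits=16)
print(nonce)
```

Every candidate nonce is one hash calculation, so a device’s hash rate (the 8 Kh/s figure quoted later for a Nexus 4) translates directly into how often it can expect to win this lottery.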
Mining digital currency, however, comes with some gotchas, especially on a mobile device. Mining is a resource hog: it can quickly drain battery life, overheat and damage hardware, or exhaust a user’s data plan by downloading a blockchain – the currency’s transaction history – which can be gigabytes in size.
Lookout experts said that CoinKrypt lacks a feature found in legitimate mining software that throttles the rate at which coins are mined in order to protect the hardware from damage. This may also be why the attackers are staying away from mining Bitcoins, which, despite being far more valuable, are much more difficult to mine.
“This leads us to believe this criminal is experimenting with malware that can take advantage of lower-hanging digital currency fruit that might yield more coins with less work,” said Marc Rogers, a researcher with Lookout. “With the price of a single Bitcoin at $650 and other newer currencies such as Litecoin approaching $20 for a single coin we are in the middle of a digital gold rush. CoinKrypt is the digital equivalent of a claim jumper.”
Rogers said it’s almost one million times easier to mine Litecoin than Bitcoin; 3.5 million times easier to mine Dogecoin.
“When we tested the feasibility of mining using a Nexus 4 by using Android mining software such as the application ‘AndLTC,’ we were only able to attain a rate of about 8Kh/s – or 8,000 hash calculations per second, the standard unit of measure for mining,” Rogers said. “Using a Litecoin calculator and the difficulty setting mentioned above we can see that this would net us 0.01 LTC after seven days non-stop mining. That’s almost 20 cents.”
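Rogers’ arithmetic is easy to verify from the figures quoted in the article (the roughly $20 Litecoin price comes from his earlier statement):

```python
# Check Rogers' numbers: 0.01 LTC mined over seven days of non-stop
# mining on a Nexus 4, valued at the ~$20-per-coin price quoted above.
ltc_mined = 0.01        # yield from 7 days at ~8 Kh/s, per Lookout's test
ltc_price_usd = 20      # approximate Litecoin price cited in the article
value = ltc_mined * ltc_price_usd
print(f"${value:.2f}")  # $0.20
```

Twenty cents for a week of saturated CPU time – far less than the electricity and battery wear it costs the victim – which is why this scheme only makes sense for an attacker spending other people’s resources.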
Other samples, Rogers said, have been targeting newer digital coins in order to avoid these issues.
Researchers at G Data Software also found mining software embedded in a version of the TuneIn Radio Pro app on the Google Play store. The Trojan, dubbed MuchSad, mines Dogecoin in addition to serving streaming radio to the user.
“The malicious functionality is put on hold when the user of the smartphone or tablet is using it. When the malicious app is first launched, a service called ‘Google Service’ is initialized,” researchers at G Data said. “After five seconds, and thereafter every twenty minutes, this checks whether the user is actively using the device. If the device is free – not in use – the malicious app starts to ‘mine’ Dogecoins for the attacker.”
In three days, the attacker was able to mine nearly 1,900 Dogecoins, or about $6.
“The only clues that might quickly raise a user’s suspicions are the increased battery usage and the heat from the mobile phone, due to the constant high load at times when the user is not actively using the device. You can even see the battery consumption in the Android system logs,” G Data researchers said. “However, the ‘Google Service’ disguise will very probably come into play again here. Barely a single user will question such battery consumption, assuming it is a system process.”
When attackers broke into the network of the University of Maryland last month, the university wasn’t sure how to react. The organization had never had a major security incident before, and this one qualified as major: 310,000 Social Security numbers and other records were gone. And then three weeks later, it happened again.
Wallace Loh, the president of the University of Maryland, told the Senate Commerce Committee Wednesday that the university’s security and IT team was caught off guard when the attackers infiltrated the college’s network on Feb. 18. The attackers made their initial intrusion into the network by uploading a piece of malware to one of the university’s Web sites that is designed to allow users to upload photos. Once on the network, the attackers began to move laterally, eventually found the directory for the university’s IT management team, and were able to change the passwords they found there.
The attackers, who had come in over the Tor network to hide their identity and location, then located a database that stored Social Security numbers of students, alumni and others, as well as university IDs, and downloaded 310,000 of them.
“It turns out, because we’ve never been hacked before, we were just flying by the seat of our pants,” Loh told the committee in his testimony.
Within 24 hours of discovering the breach, the university had disclosed the breach publicly, contacted credit-monitoring services and begun notifying the people who were affected by the breach. The university got in touch with the FBI, who came in to investigate the attack. Three weeks later, while the FBI was still digging through the details of the Feb. 18 breach, attackers again compromised Maryland’s network and had access to quite a bit of sensitive information, more than was at risk during the first attack, in fact. This time, however, the attackers simply posted one victim’s personal details to Reddit as a show of force before the FBI investigators were able to mitigate the attack.
In the wake of the first attack, Loh said that the university’s IT team had taken a number of steps to harden its network and ensure that the organization was no longer storing data it didn’t need.
“We have migrated almost all of our Web sites to the cloud,” he said. “What we have done immediately is purge almost all unnecessary data. We have purged approximately two hundred and twenty-five thousand names from our records. We have isolated sensitive information. And the cost is very, very high.”
That cost is one that many organizations around the country are feeling. Target, the victim of one of the larger breaches in history last year, is still feeling the repercussions from the attack, which affected more than 100 million people. John Mulligan, the vice president and CFO of Target, also spoke before the Commerce Committee Wednesday, and said that the company is going through many of the same steps that Maryland did, including increasing segmentation on its networks. Mulligan also said that the company is expanding its use of two-factor authentication on its networks and will, by early next year, begin issuing and accepting chip-enabled credit cards.
The Target data breach and the attack on the University of Maryland illustrate a truism that many in the security industry have known for years.
“The people who play offense will always be one step ahead of those who play defense,” Loh said.
The Snowden leaks and the ensuing critical spotlight shone on the National Security Agency’s surveillance programs have nudged many technologists, privacy hounds and politicians away from their desks and onto the front lines calling for reforms.
Two nights ago, the New York Times reported that President Obama responded to those calls and would soon reveal a new legislative proposal that would end the agency’s bulk collection of phone call records. While short on many important details, the move demonstrates that public debate still holds some sway with policy makers.
“The only [turning point] was disclosure of the program,” said Brett Max Kaufman, National Security Fellow and attorney with the American Civil Liberties Union. “Since that day, this has almost been inevitable because the claims the government made in secret to FISC [Foreign Intelligence Surveillance Court] and to Congress were never given a fair hearing from the other side. Once these programs became public and the government had to defend them in a court of law and in the court of public opinion, it was clear that these claims made in secret could not withstand arguments from civil libertarians and the public at large.”
While the president’s proposal carries the weight of the White House, it addresses only the NSA’s collection of phone metadata, and none of the other alleged surveillance activities made public by the Snowden documents. Other bills, such as the USA FREEDOM Act, extend beyond phone records to digital information collected through what’s known as the PRISM program under section 702 of the Foreign Intelligence Surveillance Act (FISA) and provide for enhanced oversight over intelligence gathering.
The Electronic Frontier Foundation, one of the most vocal advocacy groups opposing government surveillance of Americans, applauded the White House proposal yesterday, but endorsed the FREEDOM Act. The EFF called that bill “a giant step forward” and said it was a more favorable proposal than the president’s or another introduced by the House Intelligence Committee yesterday.
“Or better still, we urge the Administration to simply decide that it will stop misusing section 215 of the Patriot Act and section 702 of the FISA Amendments Act and Executive Order 12333 and whatever else it is secretly relying on to stop mass spying,” said EFF legal director Cindy Cohn and EFF legislative analyst Mark M. Jaycox. “The executive branch does not need congressional approval to stop the spying; nothing Congress has done compels it to engage in bulk collection. It could simply issue a new Executive Order requiring the NSA to stop.”
The president’s proposal would end the NSA’s collection and storage of phone data; those records would remain with the providers and the NSA would require judicial permission under a new court order to access those records. The House bill, however, requires no prior judicial approval; a judge would rule on the request after the FBI submits it to the telecommunications company.
“It’s absolutely crucial to understand the details of how these things will work,” the ACLU’s Kaufman said in reference to the “new court order” mentioned in the New York Times report. “There is no substitute for robust Democratic debate in the court of public opinion and in the courts. The system of oversight is broke and issues like these need to be debated in public.”
Phone metadata and the dragnet collection of digital data from Internet providers and other technology companies are supposed to be used to map connections between foreigners suspected of terrorism and threatening the national security of the U.S. The NSA’s dragnet, however, also swept up communication involving Americans that is supposed to be collected and accessed only with a court order. The NSA stood by claims that the program was effective in stopping hundreds of terror plots against U.S. interests domestic and foreign. Those numbers, however, quickly were lowered as they were challenged by Congressional committees and public scrutiny.
“The president said the effectiveness of this program was one of the reasons it was in place,” Kaufman said. “But as soon as these claims were made public, journalists, advocates and the courts pushed back and it could not withstand the scrutiny. It’s remarkable how quickly [the number of] plots turned into small numbers. The NSA was telling FISC the program was absolutely necessary to national security, but the government would not go nearly that far in defending the program. That shows the value of public debate and an adversarial process in courts.”