Threatpost for B2B
The ICS-CERT is warning users about a reflected cross-site scripting vulnerability in a wind-farm control portal manufactured by Nordex. The bug is remotely exploitable and could enable an attacker to execute arbitrary script code in a victim's browser.
The Nordex NC2 is a control portal for a series of wind turbines manufactured by the company. Nordex Control 2 enables a user to control the settings and operations of wind turbines remotely. A researcher named Darius Freamon discovered a reflected XSS vulnerability in the software and published some details of it in the fall. ICS-CERT’s advisory says that the disclosure was not coordinated with the vendor or the CERT.
“NCCIC/ICS-CERT is aware of a public report of a Cross-Site Scripting vulnerability affecting the Nordex Control 2 (NC2) application, a supervisory control and data acquisition/human-machine interface (SCADA/HMI) product. According to this report, the vulnerability is exploitable by allowing a specially crafted request that could execute arbitrary script code. This report was released without coordination with either the vendor or NCCIC/ICS-CERT,” the advisory says.
The vulnerability was originally disclosed in October, but no fix has been made available, and details of the bug are also posted on the OSVDB site.
“Nordex NC2 Wind Farm Portal contains a flaw that allows a reflected cross-site scripting (XSS) attack. This flaw exists because the application does not validate the ‘userName’ parameter upon submission to the /login script. This may allow an attacker to create a specially crafted request that would execute arbitrary script code in a user’s browser within the trust relationship between their browser and the server,” the OSVDB advisory says.
Nordex NC2 is a software application that gives users a portal to control the wind turbines they manage and receive data and reports from them. The researcher discovered the portal to be accessible on the Shodan search engine.
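The class of flaw OSVDB describes can be sketched in a few lines. The snippet below is a hypothetical illustration, not Nordex's actual code: a login handler that reflects a userName-style parameter unescaped, alongside the escaping fix that renders injected markup inert.

```python
# Hypothetical illustration of a reflected XSS flaw like the one
# described above; this is not Nordex's actual implementation.
import html

def render_login_error(user_name):
    # Vulnerable: attacker-controlled input is echoed into the page.
    return "<p>Login failed for user: %s</p>" % user_name

def render_login_error_safe(user_name):
    # Fixed: HTML-escape the value before reflecting it.
    return "<p>Login failed for user: %s</p>" % html.escape(user_name)

payload = "<script>alert(1)</script>"
print(render_login_error(payload))       # script tag reflected verbatim
print(render_login_error_safe(payload))  # markup escaped, rendered inert
```

In a real attack, the payload arrives in a link the victim clicks (for instance, a crafted login URL), and the reflected script then runs in the victim's browser within the site's security context.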
Image from Flickr photos of Robert Sharp.
Developers at Debian today informed users still clinging to Iceape – an Internet suite modeled on old Mozilla code – that they are cutting the cord and will stop supplying the software with security updates.
Iceape is more or less a Debian-branded hybrid of several community-driven entities, including browser, email, chat and news clients. The suite is loosely based on SeaMonkey, a suite that in turn is based on code from the original but now defunct Mozilla Application Suite.
In an email sent Monday, Moritz Muehlenhoff, a Germany-based Debian developer and member of the Debian Security Team, informed users of the change.
“Security support for Iceape, the Debian-branded version of the Seamonkey suite needed to be stopped before the end of the regular security maintenance life cycle,” read the email.
Debian, meanwhile, will continue to provide security updates for two clients within Iceape: Iceweasel, based on Mozilla’s Firefox, and Icedove, based on Mozilla’s Thunderbird.
Muehlenhoff is encouraging users either to migrate to those platforms, as they’re essentially based on the same SeaMonkey codebase, or to switch to binaries provided by Mozilla.
Iceape’s initial incarnation, the Mozilla Application Suite, was abandoned in 2006 when Mozilla announced it planned to focus its time on Firefox and Thunderbird. The suite was revived later that year as SeaMonkey, a new group of programs that users produced from Mozilla source code. Debian later changed the name to Iceape to comply with Mozilla’s trademark license.
Iceape’s users are a small but dedicated bunch; loyalists stuck with the product through its incarnations as Netscape Navigator, Mozilla and the Mozilla Application Suite. Iceape’s development had become stagnant over the last several months, though, and many supporters lamented the suite’s lack of updates, the last of which came in February.
Debian announced over the summer that it would stop backporting stable security fixes to Iceweasel, Icedove and Iceape, claiming it didn’t have the developer resources to patch products running on old Mozilla code.
Instead, the group claimed it would provide new releases on an ‘Extended Support Release branch,’ basing future package updates on code from Firefox 17.
The group foreshadowed the eventual end of security support for Iceape in that same notice saying it would “announce the end of security support for Iceweasel, Icedove and Iceape in Squeeze in the next update round.”
In this case that next update round appears to be now and Iceape looks as if it’s the first domino to fall.
UPDATE: A previous version of this story inaccurately stated that Horizon Blue Cross Blue Shield of New Jersey was not providing free credit monitoring to those affected by the breach.
On November 4, someone broke into the offices of Horizon Blue Cross Blue Shield of New Jersey and stole two laptops containing the sensitive information of more than 800,000 members.
The medical insurance provider claims that the machines were cable-locked to an employee workstation inside Horizon’s Newark headquarters, but the locks were not enough to prevent the theft. Horizon Blue Cross Blue Shield of New Jersey said that the laptops were password protected but admitted that it had failed to encrypt them.
The insurance provider says that the stolen machines may have contained member names, addresses, dates of birth, Horizon Blue Cross Blue Shield of New Jersey identification numbers, Social Security numbers, and clinical information.
As is so often the case when an organization exposes the sensitive information of its customers, Horizon Blue Cross Blue Shield of New Jersey claims that it has no reason to believe the thieves targeted the stolen laptops because of the information stored on them.
“Due to the way the stolen laptops were configured, we are not certain that all of the member information contained on the laptops is accessible,” the company said in a press release. “We have been working with law enforcement, but to date, have been unable to locate the laptops.”
The company says that it began contacting those individuals potentially affected by the breach on December 6. The insurance provider confirmed that it will also provide free credit monitoring services to all those potentially affected by the breach. It has also set up a dedicated hotline and is urging customers who believe they may have been impacted to contact it immediately if they have not yet received a letter from the company.
“To help prevent something like this from happening in the future, we are strengthening our encryption processes and enhancing our policies, procedures and staff education regarding the safeguarding of company property and member information.”
The U.S. Department of Energy has thrown back the covers on a July breach that exposed the personal information of more than 104,000 individuals, painting a less than flattering portrait of IT and agency management failures around vulnerability management, access controls and a general lack of communication between decision makers.
Hackers were able to penetrate a Web-facing application and steal personal information on 104,179 current and former employees, dependents and contractors. They were able to get in quietly and get out with names, addresses, Social Security numbers, dates of birth and bank account information, all in unencrypted data formats.
Worse, the DOE failed to live up to industry standards and government mandates around not only encryption of sensitive data, but using Social Security numbers as identifiers, running IT systems with unpatched critical vulnerabilities and outdated software.
The most damning aspect of the breach: a $4,200 software update, purchased in March, would have prevented the intrusion but instead sat for five months in a testing environment. That sum is dwarfed by the expected $3.7 million price tag for credit monitoring and other recovery costs.
“In spite of a number of early warning signs that certain personnel-related information systems were at risk, the Department had not taken action necessary to protect the PII of a large number of its past and present employees, their dependents and many contractors,” Inspector General Gregory Friedman wrote in a special report released last week.
The report said numerous technical and management shortcomings conspired to facilitate the breach, which was the third at the agency since 2011, yet the first to result in significant data loss.
Hackers were able to access the Web-facing Management Information Systems (MIS) front end for the DOE Employee Data Repository, also known as the DOEInfo database. In addition to personally identifiable information, hackers made off with HSPD12 badge information as well as security question-and-answer information used for password resets.
In addition to patching failures, decision makers in the Office of the CIO and Office of the CFO, the business owners of the MIS and DOEInfo systems, knew little of what the other was doing. The two systems, despite being interconnected, were not integrated securely, the report said. Systems that had reached end-of-life were still running, including the compromised MIS front end. The real kicker: the Office of the CIO had not certified MIS, as required, or provided it with the authorization to operate.
Neither office was completely aware of its own operating environment or system inventory. Critically vulnerable systems were not patched in a timely fashion, and of the 30 systems integrated with the DOEInfo database, two were no longer used, including one that was breached while still storing personal data.
“Officials told us that they lacked the authority to impose restrictions on system operation or take other corrective measures when known security vulnerabilities were not addressed,” the report said, citing competing priorities as a contributor to the circumstances that facilitated the attack. “We could not determine with certainty whether the lack of authority, in all instances, was real or only perceived.”
One glaring example: the Office of the CIO said system owners prohibited timely patching because downtime would interfere with productivity; those same system owners, meanwhile, said security issues were sent to the CIO office, which never responded.
“We found that communication issues within the OCIO likely contributed to the recent breach,” the report said. “Specifically, system anomalies discovered by an application developer and reported to the OCIO prior to the breach were not fully investigated prior to being corrected. In this case, we question the thoroughness of Department’s analysis of the reported anomalies.”
As for the attack, the report concluded that the hackers used readily available exploits to get past the MIS front end and attack the DOEInfo database. The report cited evidence the MIS front end had not been patched in years, and that an operating system utility and third party developer application had not been updated since 2011. The vulnerability exploited by the hackers, meanwhile, was identified by the vendor in question in January.
The Office of the CIO, which is responsible for patching at the agency, said it purchased a software update for the MIS front end in March, but functionality issues with interconnected systems left it in a test environment and prevented it from being deployed. Numerous Inspector-General reports throughout the years have pointed out shortcomings with vulnerability management in the agency; scanning and monitoring on this particular app were not done until March and the vulnerabilities were ignored, the report said.
“The Department can begin to rebuild trust by revamping its headquarters’ cyber security program and control environment, enhancing communications and coordination in a number of areas related to cyber security and safeguarding PII and moving away from the ‘stove piping’ approach to managing information systems and data,” the report said, adding recommendations that include identifying all externally facing systems, implementing continuous monitoring, removing unnecessary information from the DOEInfo database, and encrypting sensitive data.
A United States District Court judge has ruled that the bulk metadata collection program the National Security Agency has maintained for years is likely unconstitutional. The judge, ruling on a pair of lawsuits claiming that the NSA’s methods violated users’ privacy and civil rights, said that the metadata program “significantly intrudes on that expectation” of privacy.
Judge Richard J. Leon of the U.S. District Court for the District of Columbia on Monday granted a preliminary injunction in a pair of related suits filed by Larry Klayman and a co-plaintiff who asserted that the NSA metadata collection program violated their expectation of privacy. The metadata program allows the agency to require Verizon and other phone providers to hand over the call records of millions of subscribers on a regular basis. Klayman and his co-plaintiff are both Verizon customers and Leon said in his ruling that the NSA program violates their expectation of privacy.
Leon’s injunction prevents the NSA from collecting any more records pertaining to Klayman and Charles Strange and also requires the agency to destroy any records it already has relating to those two customers. However, Leon also stayed his injunction pending an appeal by the government. The ruling is the first to put a dent in the NSA’s legal armor around its intelligence collection methods. The agency has asserted that its metadata and other programs are all supported by legal authority and conducted with proper legal oversight. In his ruling, Leon said that the metadata program amounts to a large-scale invasion of privacy.
“I cannot imagine a more ‘indiscriminate’ and ‘arbitrary invasion’ than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying it and analyzing it without judicial approval,” Leon wrote in his ruling.
The judge said that the NSA’s metadata collection likely violates the Fourth Amendment. He said he also expects the government to appeal this ruling, but he rejected the government’s assertion that removing the plaintiffs from the metadata database could lead to similar demands from other people in the future. That argument is beside the point, Leon said.
“For reasons already explained, I am not convinced at this point in the litigation that the NSA’s database has ever truly served the purpose of rapidly identifying terrorists and in time-sensitive investigations, and so I am certainly not convinced that the removal of two individuals from the database will ‘degrade’ the program in any meaningful sense,” Leon wrote.
Image from Flickr photos of DonkeyHotey.
Attackers have been automating SQL injection attacks for a number of years, but in a fairly new twist, a botnet masquerading as a Firefox browser add-on is carrying out attacks on sites visited by compromised computers.
Krebs on Security reported today that the Advanced Power botnet has been operational since May and has infected 12,500 victims and targeted close to 2,000 websites.
Alex Holden, CEO of Hold Security LLC, who assisted blogger Brian Krebs with the investigation, told Threatpost this is the first time he’s seen a botnet automate SQL injection attacks; other known infections have automated searches for vulnerable websites, for example.
“We don’t have any evidence of actual theft. All these guys are doing through the botnet is finding SQL vulnerabilities,” Holden said. “I would assume the bad guys are looking at logs and figuring out which sites are vulnerable versus false positives, and they go through this and exploit the sites themselves.”
Holden and Krebs are unsure how victims are initially compromised, but the bots are spread via a phony add-on called Microsoft .NET Framework Assistant, which is very different from a legitimate add-on of the same name.
Krebs wrote the malware is using compromised Windows machines as a scanning platform for websites vulnerable to SQL injection attacks. The botnet automates this probing, which is generally a time-consuming manual process. Holden said that a penetration tester, for example, would normally test the open variables on a website with any number of benign SQL statements, something this botnet is doing on a much larger scale.
“SQL statements by themselves to a normal application would look like garbage. However if they get interpreted by a SQL server, we can see some of the results coming back,” Holden said in explaining the process. “The key for programmers is to never allow end users to interact directly with the SQL server. That’s the problem with SQL injection because once you can interact with the SQL server you can ask anything the server has to come back to you.”
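The defense Holden alludes to, keeping user input out of the SQL text itself, is what parameterized queries provide. Here is a minimal sketch using an in-memory SQLite database; the table and values are invented for illustration.

```python
# Sketch of the difference between splicing user input into SQL text
# and passing it as a parameter; the data here is illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_unsafe(name):
    # Vulnerable: the input becomes part of the SQL statement itself.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterized: the driver passes the value separately, so SQL
    # metacharacters in it are never interpreted as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
print(lookup_unsafe(payload))  # injection matches every row
print(lookup_safe(payload))    # treated as a literal name: no rows
```

The unsafe variant turns the payload into a tautology that returns all rows; the parameterized variant simply looks for a user literally named `nobody' OR '1'='1` and finds nothing.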
In this case, Holden said, the attackers have programmed in a SQL command that asks for a five-second delay before returning data.
“For the bad guy, it is an indicator,” Holden said. “Because it’s structured for SQL injection, it would introduce the right input and output. If you see a five-second delay, the bad guys know the SQL server is executing the command and not the application itself.”
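The timing heuristic Holden describes can be sketched as follows. The "endpoints" here are stand-in functions rather than real websites: one simulates a backend that executes an injected SLEEP the way a SQL server would, the other treats the payload as inert data.

```python
# Sketch of time-based SQL injection detection. A real probe would send
# the payload over HTTP; here the backends are simulated functions.
import time

def vulnerable_endpoint(user_input):
    # Simulates a backend that interpolates input into SQL, so an
    # injected SLEEP(5) actually executes on the "server."
    if "SLEEP(5)" in user_input:
        time.sleep(5)
    return "results"

def safe_endpoint(user_input):
    # Simulates a parameterized backend: the payload is plain data.
    return "results"

def looks_injectable(endpoint):
    payload = "x' AND SLEEP(5)-- -"
    start = time.monotonic()
    endpoint(payload)
    # A roughly five-second pause indicates the SQL server executed
    # the injected statement rather than the application ignoring it.
    return time.monotonic() - start >= 5

print(looks_injectable(vulnerable_endpoint))  # True
print(looks_injectable(safe_endpoint))        # False
```

The payload syntax shown assumes a MySQL-style `SLEEP()` function; other database engines use different delay primitives, but the measurement logic is the same.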
Holden said the 1,800 sites vulnerable to SQL injection found by the botnet don’t fit a typical profile, ranging in size and focus. As visitors infected with the add-on malware surf from site to site, the malware in the background conducts tests to determine whether an exploitable vulnerability is present. Krebs also said that there is a password-grabber in the malicious code, but it has not been activated. The malware, he added, has been analyzed by malwr and VirusTotal.
“Ultimately the bad guys have a road map,” Holden said. “Instead of scanning the whole Internet, they have 1,800 targets presented to them in an easy way.”
It’s taken more than six months, but top officials at the National Security Agency are finally discussing some of the details of how former agency contractor Edward Snowden got access to all of the documents he stole and what kind of damage they believe the publication of the information they contain could do. A senior NSA employee tasked with investigating what Snowden did and how he did it said that Snowden simply used the legitimate access he had as a systems administrator to steal and store the millions of documents he’s been slowly leaking to the media, and that the information in those documents could give U.S. enemies a “road map” of the country’s intelligence capabilities and blind spots.
The data that Snowden has leaked over the course of the last few months has detailed a variety of programs run by the NSA to collect intelligence on foreign citizens and governments, including the phone metadata program, PRISM and many others. Those programs are at the heart of what the agency does, and intelligence officials and lawmakers have decried their publication, saying that the leaks could make the U.S. intelligence community less effective and give foreign adversaries a detailed blueprint of how to avoid U.S. eavesdropping systems and other intelligence assets.
In an interview broadcast Sunday by 60 Minutes, the NSA official who is heading up the task force investigating the Snowden leaks said that the former contractor essentially used the legitimate access he had for his job to steal the information.
“So, the people who control that, the access to those machines, are called system administrators and they have passwords that give them the ability to go around those security measures and that’s what Snowden did,” said Rick Ledgett, a 25-year NSA employee who was part of a much-criticized 60 Minutes piece on the agency’s response to the Snowden scandal and its collection methods.
Ledgett said that Snowden, who worked in an NSA office in Hawaii, simply grabbed everything he wanted and moved it to another location where he could then download it.
“He did something that we call scraping, where he went out and just used tools to scrape information from web sites, and put it into a place where he could download it,” Ledgett said in the interview.
The volume and scope of what Snowden has taken has been a topic of much discussion among security experts and government officials, as the reporters to whom he has leaked the documents have said that only a tiny percentage of the information has been published. Ledgett didn’t specify a number but said that he wouldn’t argue with the estimate of more than 1.7 million documents that has been used. The thing that worries him the most, Ledgett said, is the potential publication of data detailing exactly what the NSA and the rest of the U.S. intelligence community knows about other countries’ activities and where its limitations lie.
“It’s an exhaustive list of requirements that have been levied against the National Security Agency. It would give them a road map of what we know and what we don’t know,” he said, “and give them, implicitly, a way to protect their information from the U.S. intelligence community’s view.”
In the same interview, NSA Director Gen. Keith Alexander defended the agency’s methods and said that it does not deliberately target the communications of huge numbers of Americans, but rather only does what is allowed under the authorities granted to it by the courts.
“There’s no reason that we would listen to the phone calls of Americans. There’s no intelligence value in that. There’s no reason that we’d want to read their email. There is no intelligence value in that,” Alexander said.
Google’s decision to automatically display images in Gmail messages has security experts on edge about the privacy and security implications of the move. Of particular concern is the ability of an attacker, or marketer, to learn whether messages are being opened, as well as the possibility of an attacker spiking an image URL with additional attacks that could lead to denial-of-service conditions or worse.
“Any image URL in the email is now requested by Google’s servers. This may allow some malicious behaviors to be automated by just sending image-laden messages to dozens of random Gmail account holders,” said HD Moore, CSO at Rapid7 and creator of the Metasploit Framework, in an email to Threatpost. “For example, some Web application flaws can be exploited simply by requesting a URL. Granted, this is no different than viewing a webpage or displaying images manually, but due to the automatic loading of the image URL, it becomes a much more practical attack.”
Google product manager John Rae-Grant said yesterday that Gmail will serve images through its proxy servers, which will scan image files for malware before they’re displayed on the user’s end.
“You’ll never have to press that pesky ‘display images below’ link again,” Rae-Grant said. “Similar to existing features like default https access, suspicious activity detection, and free two-step verification, image proxying is another way your email is protected.”
While images may arrive free of malware, experts caution there are also privacy implications to consider that could threaten personal safety and invite unwanted product marketing.
“There are two ways this could be used by malicious actors depending on how it is architected. First, it can be used to track users more effectively, because images are always enabled,” said Robert Hansen, Director of Product Management for WhiteHat Security. “However, if the images are pulled instantly, as opposed to pulled when the user opens the email, it opens up the possibility of mass denial of service attacks by Google if a spammer sends enough email to his victims with unique URLs that Google must go and fetch.”
Moore said Google could solve the tracking problem if Gmail were to cache images as email is received before the user reads the message. But there’s a hitch there too.
“It does open the door to malicious request proxying in a much more aggressive form,” Moore said. “There would be ways to avoid or mitigate these issues (request limiting, etc), but it would create additional work for Google.”
Moore said he tested the issue by sending an HTML email to his Gmail account that included an <img> tag pointing to one of his Web servers. Moore said the image was proxied through Google’s servers, and every time he opened the email and clicked “Display Image,” Google would send a new request to the web server.
“Google has stated that they will be caching images as well, but that doesn’t seem to be the case right now,” Moore said. “Caching would prevent the same image from being loaded more than once, but it doesn’t prevent tracking techniques that use unique images per target.”
Moore added that when Gmail starts displaying images automatically—Google said the move is immediate on the desktop and expected to roll out early 2014 on mobile apps—read-tracking would be enabled by default.
“This would allow a stalker or other malicious entity to determine whether the email they sent to a target is being read,” Moore said.
Google’s decision, he said, also makes it possible to enumerate active email accounts by sending email with tracking images, a simple test to determine whether accounts are dormant.
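The read-tracking and account-enumeration techniques Moore describes both reduce to the same mechanism: embed a unique image URL per recipient and watch which URLs get fetched. A minimal sketch, with hypothetical addresses and tracker domain:

```python
# Sketch of per-recipient tracking images; the recipient addresses and
# tracker domain below are hypothetical.
import uuid

recipients = ["alice@example.com", "bob@example.com"]
# One unguessable token per recipient, baked into that message only.
tokens = {uuid.uuid4().hex: addr for addr in recipients}

def img_tag(token):
    # Each outgoing message embeds an image with a unique URL.
    return '<img src="https://tracker.example/%s.png">' % token

opened = set()

def record_fetch(token):
    # A request for the URL (from the browser or Google's proxy) tells
    # the sender that this specific recipient's message was opened.
    opened.add(tokens[token])

some_token = next(iter(tokens))
record_fetch(some_token)
print(opened)  # the one recipient whose unique image was requested
```

A fetch proves the account is active and the message was rendered, which is exactly the signal automatic image loading hands to senders by default.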
Hansen argues that the user benefits little from the decision other than perhaps Gmail messages loading faster and that the user’s IP address is not sent to the remote server. But like Moore, he speculates about Google’s motive.
“This could actually just be the opening act of an additional privacy-destroying business model, where Google charges bulk email advertisers for information about email open rates,” he said.
Google has provided instructions on how to change the default setting and have Gmail ask before displaying images.
Since its inception in 2009, the U.S. Cyber Command has been run by the director of the National Security Agency. The two organizations are intertwined and even share the same space in Maryland. The continuous leaks of NSA documents this year have led some politicians and critics to argue that the two should be separated, but it appears that the Obama administration has rejected this idea.
The dual leadership role, which is now held by Gen. Keith Alexander, has been controversial since the beginning, when Cyber Command was established. The unit is a part of the U.S. Strategic Command and is staffed by military personnel. It works closely with the rest of the military, as well as with the intelligence community, specifically the NSA. Alexander plans to retire from the NSA early next year, and has said that he’d like the dual NSA-Cyber Command role to continue after he’s gone.
Some in the intelligence community–and many in the privacy and security communities–have said that the two jobs should be split, as the current situation gives too much power to one person. However, the White House has decided that it’s better to keep the NSA director as the director of Cyber Command, as well. A White House spokeswoman told the Washington Post today that the two roles are closely aligned and it’s important to keep them together.
“NSA plays a unique role in supporting Cyber Command’s mission, providing critical support for target access and development, including linguists, analysts, cryptanalytic capabilities, and sophisticated technological infrastructure,” Caitlin Hayden, a White House spokeswoman, told the Post. “Without the dual-hat arrangement, elaborate procedures would have to be put in place to ensure that effective coordination continued and avoid creating duplicative capabilities in each organization.”
Cyber Command is responsible for much of the offensive and defensive security operations run by the United States military, and it works closely with intelligence agencies.
Users of Apple’s Safari browser are at risk for information loss because of a feature common to most browsers that restores previous sessions.
The problem with Safari is that it stores session information, including authentication credentials used in previous HTTPS sessions, in a plaintext XML file called a property list, or plist, file.
The plist files, a researcher with Kaspersky Lab’s Global Research and Analysis Team said, are stored in a hidden folder, but hiding them in plain sight isn’t much of a hurdle for a determined attacker.
“The complete authorized session on the site is saved in the plist file in full view despite the use of https,” said researcher Vyacheslav Zakorzhevsky on the Securelist blog. “The file itself is located in a hidden folder, but is available for anyone to read.”
Zakorzhevsky said Kaspersky Lab has notified Apple of the vulnerability; he added that he is unaware of any active exploits targeting the information stored in a plist file.
“We’re ready to bet that it won’t be long before it appears,” Zakorzhevsky said.
Hackers have made short work of browser vulnerabilities for years in order to hijack sessions and steal data sent through the browser. An attacker who builds code to land on a victim’s browser and restore a previous session would have unobstructed access to anything the user was doing at the time, including social networking, online banking or any other potentially sensitive transaction.
“The system can easily open a plist file,” Zakorzhevsky said. “It stores information about the saved session—including http requests encrypted using a simple Base64 encoding algorithm—in a structured format.”
Zakorzhevsky said the Reopen All Windows from Last Session feature, found in the dropdown menu under Safari’s History tab, will open sites exactly as the user left them in the previous session. Those sessions are stored in a plist file called LastSession.plist, Zakorzhevsky said.
Zakorzhevsky added that Mac OS X 10.8.5 and 10.7.5 support Safari 6.0.5, which includes this functionality.
“You can just imagine what would happen if cybercriminals or a malicious program got access to the LastSession.plist file on a system where the user logs in to Facebook, Twitter, LinkedIn or their online bank account,” Zakorzhevsky said. “As far as we are concerned, storing unencrypted confidential information with unrestricted access is a major security flaw that gives malicious users the opportunity to steal user data with a minimum of effort.”
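The exposure Zakorzhevsky describes is easy to demonstrate. The sketch below builds a toy plist resembling a saved-session entry and recovers the Base64-encoded request from it; the dictionary structure is a simplified stand-in, not Safari's exact LastSession.plist schema.

```python
# Sketch of recovering a saved HTTP request from a plist file like the
# one described above. The plist structure here is a simplified
# stand-in for illustration, not Safari's actual schema.
import base64
import plistlib

# Build a toy plist resembling a saved-session entry containing a
# Base64-encoded request, cookie and all.
raw_request = b"GET /account HTTP/1.1\r\nCookie: session=abc123\r\n\r\n"
blob = plistlib.dumps({
    "SessionWindows": [{"TabStates": [
        {"SessionHistory": base64.b64encode(raw_request)}]}]})

# Anyone who can read the file can recover the request verbatim:
data = plistlib.loads(blob)
encoded = data["SessionWindows"][0]["TabStates"][0]["SessionHistory"]
recovered = base64.b64decode(encoded).decode()
print(recovered)  # the session cookie, in the clear
```

Base64 is an encoding, not encryption, so "decrypting" the stored requests takes a single library call and no key.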
The NSA surveillance scandal has created ripples all across the Internet, and the latest one is a new effort from the IETF to change the way that encryption is used in a variety of critical application protocols, including HTTP and SMTP.
The new TLS application working group was formed to help developers and the people who deploy their applications incorporate the encryption protocol correctly. TLS is the successor to SSL and is used to encrypt information in a variety of applications, but is most often encountered by users in their Web browsers. Sites use it to secure their communications with users, and in the wake of the revelations about the ways that the NSA is eavesdropping on email and Web traffic its use has become much more important. The IETF is trying to help ensure that it’s deployed properly, reducing the errors that could make surveillance and other attacks easier.
“There is a renewed and urgent interest in the IETF to increase the security of transmissions over the Internet. Many application protocols have defined methods for using TLS to authenticate the server (and sometimes the client), and to encrypt the connection between the client and server. However, there is a diversity of definitions and requirements, and that diversity has caused confusion for application developers and also has led to lack of interoperability or lack of deployment. Implementers and deployers are faced with multiple security issues in real-world usage of TLS, which currently does not preclude insecure ciphers and modes of operation,” the description in the working group’s charter says.
There have been a number of attacks developed against SSL and TLS in recent years, and there have been reports that the NSA has some unspecified capabilities to defeat SSL, as well. On the Web, there are plenty of ways that encryption can be implemented incorrectly by site owners, but the Web isn’t the only focus of the new working group. The group also will consider ways to improve the usage of TLS in other applications, specifically email.
As part of its work, the group plans to update the definition for using TLS to communicate with proxies, as well as server-to-server traffic and peer-to-peer traffic. The working group also plans to develop a set of best practices for using TLS, which may include the usage of forward secrecy and which versions of TLS developers should implement.
“The initial set of representative application protocols is SMTP, POP, IMAP, XMPP, and HTTP 1.1. It is expected that other protocols that use TLS might later be updated using the guidelines from this WG, and that those updates will happen through other WGs or through individual submissions. The WG will make the fewest changes needed to achieve good interoperable security for the applications using TLS,” the group’s charter says.
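The kind of correct-by-default TLS usage the working group wants to encourage can be sketched with Python’s standard-library ssl module. This is purely illustrative and is not prescribed by the working group itself:

```python
import ssl

# ssl.create_default_context() applies sensible defaults: certificate
# verification on, hostname checking on, and known-insecure protocol
# versions and ciphers disabled.
ctx = ssl.create_default_context()

# Applications can tighten the policy further, e.g. refuse anything
# older than TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: certificates are verified
print(ctx.check_hostname)                    # True: hostnames are checked
```

The point of such defaults is exactly what the charter describes: developers get certificate verification, hostname checking and reasonable cipher choices without having to assemble them by hand, which is where deployment errors tend to creep in.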
UPDATE: Google has removed a pivotal privacy feature from its Android operating system that gave users the ability to deny permissions to, and regulate information collection by, installed applications.
The feature, which users could control with a tool called App Ops Launcher, first appeared in Android 4.3. Just two days ago the Electronic Frontier Foundation published an article heralding the short-lived privacy control as “a huge step in the right direction.”
“Despite being overdue and not quite complete, App Ops Launcher is a huge advance in Android privacy,” wrote EFF technology projects director Peter Eckersley. “Its availability means Android 4.3+ [is] a necessity for anyone who wants to use the OS while limiting how intrusive those apps can be.”
As it turns out, Google removed the feature in Android version 4.4.2, the mobile operating system’s most recent update. When asked why, Google told the EFF that the control was an experimental one that it had introduced into Android by accident. Furthermore, the search giant claimed that the permission-throttling privacy feature was breaking some of the applications it attempted to manipulate.
In a comment on Google Plus, Google Android engineer Dianne Hackborn explained:
“That UI is (and it should be quite clear) not an end-user UI. It was there for development purposes. It wasn’t intended to be available. The architecture is used for a growing number of things, but it is not intended to be exposed as a big low-level UI of a big bunch of undifferentiated knobs you can twiddle. For example, it is used now for the per-app notification control, for keeping track of when location was accessed in the new location UI, for some aspects of the new current SMS app control, etc.”
Eckersley claims that Google opened up an enormous privacy hole by removing the feature; a hole that Android’s primary competitor, Apple’s iOS, reportedly sealed off years ago. In order to remedy the loss, he argues, Google must not only reenable the privacy control, but also expand it.
Users, he writes, should have the capacity to disable the collection of any trackable identifiers with a single control. Android should also empower users with the ability to cut off network access to any applications they choose in order to combat developers that would otherwise collect sensitive data frivolously.
The EFF was so enamored of the feature that it now finds itself torn over what users should do. On the one hand, privacy-conscious users may want to avoid updating so that they can keep the feature. On the other hand, the Android 4.4.2 update resolved a nasty SMS-based denial-of-service vulnerability along with other security issues.
“So, for the time being,” Eckersley wrote, “users will need to choose between either privacy or security on their Android devices, but not both.”
European diplomats and ministries of foreign affairs have been targeted during recent G20 meetings by Chinese-speaking hackers conducting espionage campaigns using malware to siphon secrets from compromised computers.
The latest incidents came in August when spear phishing messages spiked with attachments promising information on U.S. military options in Syria zeroed in on diplomats and foreign ministers prior to the G20 Russia Summit in St. Petersburg in September.
Researchers at security company FireEye infiltrated a command and control server used in this campaign and observed communication between 21 compromised machines and the C&C server; nine of the compromised machines were beaconing back from ministries in five European countries, eight of them ministries of foreign affairs. The remainder of the connections were made either by the attackers or security researchers.
Once on a victim’s machine, the attackers were able to use a variety of malicious code samples to steal not only data but also legitimate credentials, moving laterally on the victims’ networks in search of more vulnerable systems and exposed data.
The attacks, which FireEye said have been active since 2010, have also been used against targets in aerospace, energy, government, high tech, consulting and services, chemicals, manufacturing and mining industries. The lures have been target-specific as well; in separate campaigns, the London Olympics of 2012 as well as the promise of illicit photographs of French first lady Carla Bruni were themes.
The spear-phishing emails are laced with links to sites hosting malware downloads or malicious attachments—a cocktail of malicious screensavers, Java, Microsoft Word and Adobe PDF exploits, some dating back to 2010.
FireEye estimates there were as many as 23 command and control servers used in the G20 Russia campaign, dubbed Ke3chang, in a complicated, well thought-out campaign targeting high-profile, influential government officials.
“The scarcity of individual attacks indicates the attackers are selective about their targets,” said Nart Villeneuve, a researcher with FireEye, adding that the company has already been in contact with relevant authorities about the attacks.
The malware used by the Ke3chang attackers has evolved; FireEye believes there are three distinct signposts where malware changed and improved with additional features and capabilities.
“We believe these three types of malware are an evolution of a single project from a single developer or small team of developers sharing code,” Villeneuve said, adding that the attacks not only establish a backdoor connection, but also enable the attackers to upload more malware, download files, run shell commands and even put the attack to sleep if so desired. All of the communication is done over HTTP, he said.
The current version of the campaign, called BS2005, capitalized on the possible U.S. military intervention in Syria late this summer. The attackers packaged the malware in a ZIP file called “US_military_options_in_Syria.pdf.zip” that contained an executable of the same name. The executable was a loader that dropped an executable called ie.exe, compiled in July, which acted as the backdoor calling out to an IP address at 122[.]10[.]83[.]51. The samples also contained tags that allowed the attackers to monitor victims. The attackers also took great care to disrupt any attempts by security researchers to analyze the malware or the campaign. For example, the malware kills any processes related to Maxthon, a free Chinese browser, or 360se, a free Chinese antivirus product.
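The lure’s naming trick, an executable hiding behind a document-style double extension, can be flagged mechanically. The sketch below is illustrative only; the member filename is assumed from the article’s description of the lure, not taken from the actual sample:

```python
import io
import zipfile

# Extensions that indicate a Windows executable payload.
SUSPICIOUS = {"exe", "scr", "com", "pif"}

def deceptive_members(zip_bytes):
    """Flag archive members that hide an executable extension behind a document one."""
    hits = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            parts = name.lower().rsplit(".", 2)
            # e.g. "report.pdf.exe" -> ["report", "pdf", "exe"]
            if len(parts) == 3 and parts[2] in SUSPICIOUS:
                hits.append(name)
    return hits

# Build a tiny in-memory archive mimicking the lure described above.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("US_military_options_in_Syria.pdf.exe", b"not a real executable")
print(deceptive_members(buf.getvalue()))  # ['US_military_options_in_Syria.pdf.exe']
```

Checks like this are a common first-pass filter on email attachments precisely because the double-extension trick is so widely reused.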
In addition to this summer’s G20 campaign, the same group targeted the 2012 London Olympics, hitting a single chemical manufacturer with a phony PDF schedule of the Summer Games, as well as the 2011 Paris G20 Summit, that time promising nude pictures of Bruni, the wife of then-French president Nicolas Sarkozy.
An older campaign, called MyWeb, targeted security and defense industries and introduced an anti-sandboxing feature as well as a sleep value relative to the malware’s ability to beacon back to C&C servers.
All three malware families used by this gang used domains from dynamic DNS providers for their command infrastructure, and all share common IP addresses. FireEye’s mapping and correlation of those addresses leads its researchers to believe there could be as many as 99 C&C servers, largely in the U.S., China and Hong Kong. Once on a victim’s machine, the script is similar: the malware gathers system and network information and uses a number of malicious executables to steal credentials and attempt to move laterally on the network. It also has the capability of grabbing network group information and looks specifically for domain administrators and those in charge of system access.
FireEye said the attacks against diplomats continue.
“This report demonstrates that attackers are able to successfully penetrate government targets using exploits for vulnerabilities that have already been patched and despite the fact that these ministries have defenses in place,” Villeneuve said. “This illustrates the limitations of traditional defenses and highlights the need for security strategies that not only leverage advanced technologies designed to defend against targeted threats, but also the incorporation of threat intelligence and an incident response capability.”
Download FireEye’s report here.
One good way to measure the popularity of an emerging technology or trend is to see how much attention attackers and malware authors are paying it. Using that as a yardstick, Bitcoin is moving its way up the charts in a hurry. The latest indication is some malware that researchers at Arbor Networks identified that is masquerading as a utility to alert Bitcoin owners of shifts in the currency’s value, but is actually a Trojan.
The utility, named Bitcoin Alarm, is currently being distributed via email and is allegedly designed to find and steal victims’ Bitcoins. Researchers at Arbor found the utility in several spam messages they received, and their initial investigation found that there are several layers of deception and obfuscation in the file that’s downloaded, making its behavior a little difficult to analyze at first. Perhaps that’s why only a handful of antimalware applications are able to identify it as malicious.
“The download BitcoinAlarm.exe (MD5: edfa12d4a454b0eb786bbe92050ab88a) had just 1 hit on VirusTotal when I first scanned it (from Kaspersky). Is it a false positive on a nice free tool? Lets dig deeper,” Kenny Macdermid of Arbor wrote in an analysis of the malware.
The download includes a RAR archive containing a script and a file called “winupdate.exe”.
“A quick check of winupdate.exe with VirusTotal shows that it’s the valid (and non-malicious) AutoIt executable. AutoIt is a great little scripting language for Windows, it’s especially useful for automating GUI related tasks. So if winupdate.exe is AutoIt that would make 5943564.IFW an AutoIt script. It looks like it was obfuscated somewhat though,” Macdermid said.
One of the things the script does is check to see whether there’s a specific antimalware application running, and if so, it will sleep for 20 seconds. The check for running antimalware is a classic behavior of a malicious application, and Macdermid said that after the check is completed the app performs a number of other operations designed to disable security functionality. The app then decrypts and runs a file named 20070.RQT.
“The decrypted file had 30/48 hits of VirusTotal when I scanned it (MD5: 224c73f8172123e5ddca2302425664a6). It’s called NetWiredRC and is a remote access trojan made for stealing login information, and likely in this case being used to steal Bitcoins. It connects to bitcoins.dd-dns.de on port 3360,” Macdermid said.
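The MD5 digests Macdermid cites are simply content hashes used as sample identifiers on services like VirusTotal. A minimal sketch of producing one; the sample bytes here are invented stand-ins, not real malware:

```python
import hashlib

def md5_fingerprint(data: bytes) -> str:
    """Return the hex MD5 digest used to identify samples on services like VirusTotal."""
    return hashlib.md5(data).hexdigest()

# Hypothetical bytes; a real lookup would hash the downloaded file itself.
sample = b"illustrative stand-in for a suspicious download"
print(md5_fingerprint(sample))  # a 32-character hex string
```

Because the digest is computed over the file’s exact bytes, any repacking or re-obfuscation of the malware yields a new hash, which is one reason hash-based detection lags behind behavioral analysis.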
The link to download the Bitcoin Alarm app is now returning a 404 error and Macdermid said that many more antimalware tools are now detecting it as a piece of malware.
Google has patched a previously disclosed issue in its Nexus line of phones that could have opened users up to a nasty series of SMS-based denial-of-service attacks.
The company pushed the fix out alongside version 4.4.2 of Android on Monday to the Nexus 4, 5, 7 and 10 devices.
According to FunkyAndroid.com, a British site that parses Android Open Source Project code with each Android update and creates a changelog, 4.4.2 brings a fix for “d00f7cd : Android denial of service attack using class 0 SMS messages.”
The denial of service attack was first brought to light by researcher Bogdan Alecu at a security conference in Bucharest, Romania at the end of November. Before the update an attacker could have sent a barrage of Flash, or Class 0 SMS messages to a Nexus device and cause it to restart, freeze or lose its connection to the mobile internet.
Those Flash SMS messages previously just piled up, one after the other on device screens and led to the aforementioned problems. Going forward the messages will be displayed one at a time and queued until users dismiss them.
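The patched behavior amounts to a display-plus-queue structure. The toy sketch below only models the fix as described; it has nothing to do with Android’s actual implementation:

```python
from collections import deque

class FlashSmsQueue:
    """Toy model of the patched behavior: show one Class 0 message at a
    time and hold the rest until the user dismisses the current one."""

    def __init__(self):
        self.pending = deque()
        self.showing = None

    def receive(self, msg):
        if self.showing is None:
            self.showing = msg        # display immediately
        else:
            self.pending.append(msg)  # queue instead of stacking on screen

    def dismiss(self):
        self.showing = self.pending.popleft() if self.pending else None

q = FlashSmsQueue()
for i in range(3):
    q.receive(f"flash message {i}")
print(q.showing)  # flash message 0
q.dismiss()
print(q.showing)  # flash message 1
```

The pre-patch failure mode was the absence of any such queue: each incoming Flash message demanded the screen at once, and a rapid barrage could exhaust the UI and destabilize the device.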
Google’s fix comes almost six months after Alecu claims the company promised it to him and more than a year after Alecu, who also works as a system administrator at the Dutch IT firm Levi9, found the bug initially.
The 4.4.2 update also fixes a separate denial-of-service vulnerability, triggered when devices received 0-byte WAP push messages, and brings a few cosmetic changes to devices, including several camera tweaks like better white balancing, less shutter lag, and more accurate focusing.
The makers of a popular Android flashlight application have settled with the Federal Trade Commission over allegations that they covertly tracked the locations of the “Brightest Flashlight Free” users and sold that information to advertising firms.
The settlement resolves a complaint filed by the FTC against the company that developed the application, Goldenshore Technologies LLC, and the man who manages that company, Erik M. Geidl.
The FTC claims in its complaint that while the app purported to act as a flashlight by activating a device’s camera flash function, it also transmitted – or permitted the transmission of – various device information to third parties, including advertising networks. That information transmitted allegedly included geolocation and persistent device identification information.
Furthermore, the Brightest Flashlight Free app begins collecting device information while users are viewing the EULA but before they have accepted its terms and conditions. In other words, the application even collects information about users who refuse to accept its terms.
Because of all of this, the FTC claims that Goldenshore Technologies engaged in unfair and deceptive acts in violation of the Federal Trade Commission Act.
“When consumers are given a real, informed choice, they can decide for themselves whether the benefit of a service is worth the information they must share to use it,” said Jessica Rich, Director of the FTC’s Bureau of Consumer Protection in a statement. “But this flashlight app left them in the dark about how their information was going to be used.”
Pending full approval from the FTC, the settlement will prohibit Goldenshore Technologies from misrepresenting what user information it collects and how it uses that information. The company must now obtain clear consent from users before collecting such information. Goldenshore Technologies must also delete all personal information it has collected from app users to this point, and may be forced to pay $16,000 for each of the counts the FTC claims it violated.
With each day bringing new information about the way that intelligence agencies and law enforcement are tracking the activities and movements of U.S. citizens, the issue of when these organizations can legally obtain such data has become a major one. Now, a case that seemingly has little connection to the surveillance debate has attracted the attention of privacy and civil rights advocates and could become a key factor in the way that law enforcement agencies have to handle cell phone location data.
Most convicted bank robbers would have little expectation of finding themselves on the same side of the fence as the lawyers at the EFF, but that’s what’s happened to Kendall O. Smith. Arrested for robbing a bank in Connecticut, Smith was facing many years in prison if convicted. During the course of their investigation, police found themselves needing access to Smith’s cell phone location data in order to connect him to the crime. Officers went to a judge and, as required by state law, got approval to retrieve six months’ worth of information, including location data.
“Even though the government went to a judge to get authorization to get the records, they didn’t get a search warrant. Instead, both federal and Connecticut state law authorize police to obtain cell phone location records with a showing less than the probable cause required to obtain a warrant. The trial court found the records were obtained properly and Smith was convicted and sentenced to 55 years in prison,” Hanni Fakhoury of the EFF wrote in a blog post explaining the organization’s reasons for filing an amicus brief on behalf of Smith in his appeal of the conviction.
“On appeal, Smith argues that the Fourth Amendment’s prohibition against unreasonable searches and seizures means the police must obtain a search warrant supported by probable cause to get cell site records. Our brief agrees, explaining how cell site records can reveal a person’s location with increasing precision, triggering an expectation of privacy and requiring police to obtain a probable cause search warrant in order to access this information.”
In its brief, the EFF argues that police should be required to obtain a search warrant, which is a small additional burden, because it would help protect users’ privacy. Cell phone location information can be used to track a user’s movements with great precision over a long period of time, giving viewers a window into what the user is doing and where he’s going. The EFF and other privacy advocates have argued that this data is among the most private information a user can generate and it should be protected at a high level.
“This case involves an important, disputed question that implicates the privacy of all Connecticut citizens: whether historical cell site location information (CSLI)–records collected and held by a cell phone company and capable of establishing a person’s location, his patterns of movement and ultimately his associations and affiliations–should be protected by the requirements of a search warrant,” the brief says.
“Little is more revealing than a person’s movements over time.”
The EFF brief explains that the huge expansion of cell-phone usage in recent years has led to a concurrent expansion in the networks of towers that providers maintain. This, along with the increased speeds of cellular networks, has greatly increased the precision with which someone can be tracked using CSLI. That means that the intrusions on users’ expectations of privacy can be that much greater, the EFF says.
“Long-term electronic surveillance poses the serious risk of upsetting the traditional relationship between citizen and state by avoiding what has long been the ‘greatest protection of privacy’: ‘practical’ restraints such as the cost and difficulty of maintaining long term, covert surveillance,” the brief says.
Image from Flickr photos of Razor512.
FreeBSD, the open-source, UNIX-like operating system, announced that new versions will no longer rely directly on Intel’s RDRAND and Via Technologies’ Padlock on-chip random number generators (RNGs).
The move apparently follows reports from earlier this year that the National Security Agency had allegedly weakened cryptographic standards developed in conjunction with the National Institute of Standards and Technology so that the NSA could circumvent them in order to perform its surveillance operations.
Citing a “high probability of backdoors” and mentioning Edward Snowden by name on a security working group site for the FreeBSD Developer Summit, the group says it “cannot trust [these hardware] RNGs to provide good entropy directly.” Instead, they plan on generating their random numbers with either the Yarrow or its successor Fortuna pseudo-RNGs, each of which is open-source and was developed by famed cryptographers Bruce Schneier, John Kelsey, and Niels Ferguson.
“For 10, we are going to backtrack and remove RDRAND and Padlock backends and feed them into Yarrow instead of delivering their output directly to /dev/random,” FreeBSD’s developers wrote in a EuroBSDcon 2013 Developer Summit special status report on their website. “It will still be possible to access hardware random number generators, that is, RDRAND, Padlock etc., directly by inline assembly or by using OpenSSL from userland, if required, but we cannot trust them any more.”
RNGs are an integral aspect of key-creation for strong encryption. Crypto-systems with weak RNGs or PRNGs that don’t create suitably random numbers are considered weak cryptographic systems.
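FreeBSD’s approach, feeding suspect hardware output into a software PRNG’s entropy pool rather than using it directly, can be illustrated with a simple hash-based mixer. This sketch is emphatically not Yarrow or Fortuna; it only shows why mixing helps: no single input source fully determines the pooled output.

```python
import hashlib
import os

def mix_entropy(*sources: bytes) -> bytes:
    """Illustrative pool mixing: hash untrusted inputs together so that a
    backdoored source cannot fully control the result unless it can predict
    every other input. Not the actual Yarrow/Fortuna design."""
    pool = hashlib.sha256()
    for src in sources:
        pool.update(src)
    return pool.digest()

hw_rng_output = os.urandom(32)   # stand-in for RDRAND/Padlock output
other_sources = os.urandom(32)   # stand-in for interrupt timing, etc.
seed = mix_entropy(hw_rng_output, other_sources)
print(len(seed))  # 32
```

The hardware generators still contribute entropy under this scheme; they are simply no longer trusted to be the sole, directly consumed source.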
Moxie Marlinspike has published landmark research on SSL vulnerabilities, taken on certificate authorities and even built an alternative to CAs as we know them today called Convergence. But now that government surveillance and online privacy have been elevated to mainstream dinner-table conversations, the researcher has made a significant dent in the problem of bringing secure communication to the masses.
This week’s announcement that Open WhisperSystems’ TextSecure protocol will be integrated into CyanogenMod’s default SMS app means that upwards of 10 million Android users will be able to conduct chats online that are encrypted end-to-end, and theoretically out of reach of snoops and spies.
This has Marlinspike excited, and anxious to bring TextSecure and secure communications to more than just the Android platform; Open WhisperSystems has an iOS client and browser extension on the drawing board.
“As we expand our client base, we’ll be moving to this world where we have truly cross-platform, end to end secure communication with the really massive user base, which is really exciting,” Marlinspike told Threatpost. “This Cyanogen deployment is perhaps the largest deployment of end to end secure messaging ever.”
TextSecure, unlike other secure chat apps such as Silent Text, does not require both ends of the conversation to have an installed client. Nor are the encryption keys securing the chat sessions stored with Open WhisperSystems. That means the organization is not subject to government requests via warrants or National Security Letters for encryption keys or user data.
“That’s definitely happening and an important component of any secure communication system. You want the servers to be completely untrusted,” Marlinspike said. “People get very caught up in where servers are hosted and that really shouldn’t matter. Our position should be that there are really no good governments or safe regions where you can put a server. You have to design servers to be completely untrusted, and you have to have client software that is open source and anyone can verify the security.”
The partnership between the CyanogenMod and Open WhisperSystems began earlier this year when the aftermarket Android firmware provider approached Marlinspike about developing a secure messaging system for their users.
“Our position is one of building a business that is not based on collecting as much information as possible about the user,” Marlinspike said. “Seems like they’re trying to think of ways of improving the user’s default experience with respect to privacy.”
Marlinspike said the native CyanogenMod SMS client was modified to support the TextSecure protocol, and that TextSecure for CyanogenMod runs on the TextSecure V2 protocol, supporting forward secrecy and the 3DHE key agreement for deniable messages.
“If an outgoing SMS message is addressed to another CyanogenMod or TextSecure user, it will be transparently encrypted and sent over the data channel as a push message to the receiving device. That device will then decrypt the message and deliver it to the system as a normal incoming SMS,” Marlinspike said in the announcement. “The result is a system where a CyanogenMod user can choose to use any SMS app they’d like, and their communication with other CyanogenMod or TextSecure users will be transparently encrypted end-to-end over the data channel without requiring them to modify their work flow at all.”
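The forward secrecy TextSecure offers comes from running a fresh ephemeral key agreement per session. The toy Diffie-Hellman sketch below illustrates only that idea; TextSecure’s actual 3DHE runs over elliptic curves and mixes in long-term identity keys, and the parameters here are deliberately simple and not secure:

```python
import secrets

# Toy finite-field Diffie-Hellman. The modulus is a Mersenne prime chosen
# for readability, NOT a safe real-world parameter.
P = (1 << 127) - 1
G = 3

def ephemeral_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# Each session draws fresh ephemeral keys; once they are deleted, a later
# compromise of a device cannot decrypt recordings of past sessions.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

shared_a = pow(b_pub, a_priv, P)  # one side's view of the session secret
shared_b = pow(a_pub, b_priv, P)  # the other side's view
print(shared_a == shared_b)       # True: both derive the same secret
```

Because only the ephemeral public values ever cross the wire, a passive recorder of the ciphertext gains nothing from later seizing either party’s long-term keys, which is the property the term “forward secrecy” names.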
While the Android rollout is slowly under way, the early feedback is encouraging.
“Mostly, the feedback that we’ve gotten is that it’s too invisible; people can’t even tell that it’s happening. They would like more visual feedback, which is a good problem to have and a good problem to start from. Rather than the opposite which is this is too cumbersome or impossible to use,” Marlinspike said. “Right now people are questioning whether it’s really working. ‘Yes it really is.’”
Visual feedback via some kind of icon or system notification is likely the next priority for the TextSecure-CyanogenMod integration, in particular making that feedback work with closed-source software such as Google Hangouts.
Next off the line could be the iOS client, followed shortly thereafter by a client for Open WhisperSystems’ RedPhone secure voice app and a browser extension that would put Open WhisperSystems on its way to having encrypted cross-platform asynchronous messaging systems anchored by open protocols and open source software.
“We want truly cross-platform support, so that means iOS, Android and something for the desktop,” Marlinspike said. “If you can do something with a browser extension, then that eliminates a lot of friction for users. You get these messages on your phone and you get them on your desktop, which is really an integrated chat experience with whatever device you’re using.”
The National Security Agency is monitoring a certain type of cookie – deployed by the search giant Google – as yet another tool in its increasingly public surveillance apparatus.
This, according to slides from an April 2013 NSA presentation acquired by the Washington Post, is the latest revelation from former National Security Agency contractor Edward Snowden.
The slides indicate that the NSA is monitoring Google’s PREF cookie. The NSA is reportedly utilizing an analytics tool called HAPPYFOOT that aggregates leaked location data, in this case the PREF cookie. It is unclear exactly how the NSA’s HAPPYFOOT tool acquires these PREF cookies, though the slides seem to suggest that the spy agency may be exploiting a data leak vulnerability of some sort. However, the Washington Post reports that the NSA may be acquiring these cookies with Foreign Intelligence Surveillance Act court orders.
The slides also reveal that the NSA has partnered with the National Geospatial-Intelligence Agency, and the Washington Post reports that the two groups are using these PREF cookies to determine the locations of surveillance targets in order for the NSA to perform remote spying operations.
Cookies are small pieces of data that companies send from their websites and store in the browsers of the individuals visiting those sites. When a user revisits one of these sites, the user’s browser sends the cookie back, and the server handling the site then recognizes the browser.
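That round trip can be sketched with Python’s standard-library http.cookies module; the cookie value below is invented for illustration and is not a real PREF identifier:

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header carrying a unique identifier.
server_cookie = SimpleCookie()
server_cookie["PREF"] = "ID:1a2b3c4d5e6f"
header = server_cookie["PREF"].OutputString()
print(header)  # PREF=ID:1a2b3c4d5e6f

# Client side: on the next visit the browser echoes the cookie back in a
# Cookie header, and the server parses it to recognize the returning browser.
client_cookie = SimpleCookie()
client_cookie.load(header)
print(client_cookie["PREF"].value)  # ID:1a2b3c4d5e6f
```

It is exactly this echo-back behavior, a stable identifier volunteered by the browser on every visit, that makes such a cookie useful to anyone observing the traffic.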
A Wall Street Journal article from February 2012 examined the discovery of the PREF cookie by a man named Stephen Frankel. Frankel’s case was particularly odd because he observed the cookie in his Safari browser despite the fact that he had blocked all tracking cookies and, odder still, had not visited any sites in Safari.
The Journal reported that the PREF cookies primarily serve Google’s Safe Browsing malware protection feature.
Ashkan Soltani, a technology consultant for the Wall Street Journal, noted that the cookie – despite not being an advertising cookie – contains a unique identification number and cannot be disabled without disabling Google’s phishing and malware protection feature. Basically, Soltani explained, other browsers periodically ping Google for updated lists of dangerous sites, and Google responds by installing the PREF cookie on user machines. This is how the cookie ended up in Frankel’s unused Safari browser.
Of course, the PREF cookie serves another purpose as well, and this other purpose seems to be that which the NSA is exploiting. On a Google policies and principles page that had to be translated from Spanish, the company notes that the PREF cookie gives Google the ability to determine user locations so that Web-content is displayed in the user’s preferred language. Per Google’s explanation, the cookie also grants location data to certain sites that want to display location-sensitive content like local news, traffic, and weather reports.
The PREF cookie may appeal to the NSA because of these characteristics: it seems to be innocuous if not beneficial, it works when all other cookies are blocked, it is present even on unused browsers, and it has the capacity to collect location data.