The appeal of WhatsApp, the cross-platform mobile messaging app recently acquired by Facebook for a stunning $19 billion, was that it kept its promise of not collecting user information that would be converted into ad revenue.
The acquisition by Facebook, however, likely changes that dynamic, and that worries consumer privacy advocates. Two such groups filed a complaint this week with the U.S. Federal Trade Commission requesting an investigation and possibly an injunction temporarily blocking the acquisition.
The Electronic Privacy Information Center (EPIC) and the Center for Digital Democracy (CDD) filed the complaint recently, stepping up on behalf of WhatsApp’s hundreds of millions of active users. The complaint said Facebook has made it clear it will incorporate WhatsApp user data into its business model, and that’s something users didn’t sign up for.
“The proposed acquisition will therefore violate WhatsApp users’ understanding of their exposure to online advertising and constitutes an unfair and deceptive trade practice,” the complaint said.
The concern is that Facebook will be able to construct complete profiles on WhatsApp users, most of whom are likely already among Facebook’s 1.2 billion subscribers. WhatsApp users who relied on the privacy promises made by the app could now be subject to the intrusive targeted advertising that is the heart of Facebook’s revenue model. Facebook, meanwhile, has established precedent with past acquisitions, including Instagram in 2012, of changing existing privacy policies and terms of service to indeed collect user data.
In backing up its claims of deceptive trade practices, EPIC and the CDD point out that WhatsApp users expect a “privacy-protective messaging service” and could not have anticipated their data would be subject to Facebook’s data collection and mining practices, the complaint said.
EPIC formally asked the FTC to investigate the acquisition on these grounds, in particular concerning Facebook’s ability and intent to access WhatsApp users’ mobile phone numbers and metadata. It also asked that the acquisition be halted until the investigation is completed.
“In the event that the acquisition proceeds, order Facebook to insulate WhatsApp users’ information from access by Facebook’s data collection practices,” the complaint said.
According to Reuters, Facebook said in a statement that WhatsApp will operate as a separate company and will honor its privacy and security commitments.
One thing that’s been made abundantly clear by mathematicians and cryptographers alike is that despite the NSA’s dragnet surveillance of phone calls and Internet traffic, the spy agency has not been able to crack the math holding up encryption technology.
Those who wish to spy and steal on the Internet continuously hit a wall when it comes to crypto algorithms, leaving no alternative but to find a way to subvert the technology in order to reach their targets.
In response, security and privacy experts, as well as cryptographers, have urged companies to turn HTTPS on by default for web-based services such as email and social networking. This week, however, a group of researchers from UC Berkeley published a paper explaining new attacks that aid in the analysis of encrypted traffic to learn personal details about the user, right down to possible health issues, financial affairs and even sexual orientation.
The paper, “I Know Why You Went to the Clinic: Risks and Realization of HTTPS Traffic Analysis,” builds on previously successful research into SSL traffic analysis, Tor and SSH tunneling, exposing vulnerabilities in HTTPS that lead to precise attacks on the protocol and expose sensitive personal information.
The researchers—Brad Miller, Ling Huang, A.D. Joseph and J.D. Tygar—developed new attack techniques they tested against 600 leading healthcare, finance, legal services and streaming video sites, including Netflix. Their attack, they said in the paper, reduced errors from previous methodologies more than 3 ½ times. They also demonstrated a defense that reduces attack accuracy by 27 percent by increasing the effectiveness of packet-level defenses in HTTPS, the paper said.
“We design our attack to distinguish minor variations in HTTPS traffic from significant variations which indicate distinct traffic contents,” the paper said. “Minor traffic variations may be caused by caching, dynamically generated content, or user-specific content including cookies. Our attack applies clustering techniques to identify patterns in traffic.”
Using the techniques presented in the paper, an attacker could learn much more about a user’s activity than just the IP address of the website they’re visiting; specific pages on the site can now be deduced with greater accuracy than in previous work, the researchers said.
The paper points out a number of privacy consequences beyond government surveillance as well. For example, enhanced SSL traffic analysis by an ISP can lead to enhanced customer data mining and intrusive targeted advertising. Employers can more effectively monitor employees’ traffic, and the techniques can also improve the censorship efforts of oppressive regimes, putting the liberties of privacy advocates at risk.
The attacks were tested on a number of heavily visited websites, including the Mayo Clinic, Kaiser Permanente, Planned Parenthood, Wells Fargo, Bank of America, Vanguard, Legal Zoom, the ACLU, Netflix and YouTube. The researchers established a baseline by visiting webpages on the respective sites and recording subtle changes to the URLs, especially those brought upon by browser cookies and caching that affect packet sizes for internal pages compared to homepages that are much more highly trafficked.
The researchers said that their techniques, conducted against more than 6,000 webpages, were able to accurately identify internal pages and information 89 percent of the time on average.
The paper also presents a possible defense against these attacks, which the researchers called Burst, which they demonstrate reduces attack accuracy by 27 percent. The paper said the technique operates between the application and TCP layers and is able to obscure high level features of traffic.
“The Burst defense outperforms defenses which operate solely at the packet level by obscuring features aggregated over entire TCP streams,” the paper said. “Simultaneously, the Burst defense offers deployability advantages over techniques such as HTTPS since the Burst defense is implemented between the TCP and application layers.”
While the Target data breach may be in the rear view mirror, research this week shows it’s clear that many attackers are still using point of sale malware, namely Dexter and Project Hook, in active attacks.
Researchers at Arbor Networks’ Security Engineering & Response Team (ASERT) looked at several such campaigns, exfiltrated data dumps and decoded them to analyze the scope of their compromises. The group also analyzed network activity triggered by Dexter malware samples.
According to Arbor’s Threat Intelligence Brief 2014-3 released yesterday, researchers noticed a specific variation of Dexter, Dexter Revelation, exfiltrating stolen data, stored in fake .zip files and .txt files – via FTP credentials – from compromised terminals.
Revelation was one of three Dexter variants (along with Stardust and Millennium) that ASERT noticed in December but at that time it was unclear just how the infections were happening.
While researchers had assumed Revelation was a fairly new brand of malware, new research has traced developmental versions of the malware back almost a year; early builds date to April 2013.
It turns out the Revelation malware has several handy functions, including a memory-scraping procedure that “scours system memory looking for plaintext data that matches a credit or debit card format” and a keylogger used to “capture keyboard activity and other system information.” The fake .zip files store a four-byte XOR key that can be used to decode the file’s contents.
The report suspects a threat actor going by either “Rome0” or “rome0” is directly involved with Dexter. Researchers say they’ve noticed actors going by both of the usernames demonstrating their familiarity with banking Trojans online and frequenting various carding forums.
ASERT posted a list of IP addresses and hostnames associated with Dexter’s command and control activity in the report that it’s hoping organizations review.
“Organizations are encouraged to check logs and other indicators of network activity associated with these IP addresses and/or hostnames to find systems compromised as part of a past or current attack campaign.”
The IP addresses listed in red indicate that the associated C&C servers were still active as of the report’s publication.
While Project Hook, another point-of-sale malware, is less active than Dexter, researchers are still encouraging organizations to remain vigilant, especially after they found a URL hosting back-end panels for Project Hook and another PoS malware, Alina, in January and early February.
Arbor’s report came out the same day that Target announced it would finally overhaul its information security processes and that its chief information officer, Beth Jacob, had resigned.
Target reports that it will fill the position with an external hire as well as assign a new role: chief compliance officer.
“Target will be conducting an external search for an interim CIO who can help guide Target through this transformation,” Target’s Chairman, President, and CEO Gregg Steinhafel said Wednesday.
The transformation Steinhafel is referring to is the stress the U.S. retailer has undoubtedly had to grapple with after suffering a massive breach in November. Attackers were able to set up a command and control server and lift more than 40 million credit and debit card records and 70 million other records of customer details from Target point of sale systems.
We may be three months removed from the Target fiasco but point-of-sale malware campaigns continue to permeate the headlines.
Texas-based Sally Beauty Supply, a chain with around 2,700 locations nationwide, confirmed yesterday that someone attempted to breach its systems, but would not confirm that customer data was at risk. According to Krebs on Security, a batch of 282,000 stolen credit card numbers popped up on an underground market, and three banks purchased some of their customers’ cards in hopes of finding the theft’s origin. All three banks found that the cards they acquired had been used at a Sally Beauty Supply store within the previous 10 days.
Microsoft will patch a lingering zero-day vulnerability in Internet Explorer next Tuesday, one of five bulletins it will release as part of its March 2014 Patch Tuesday security updates.
The IE 10 zero-day was disclosed close to a month ago when researchers at FireEye reported on Operation SnowMan, an espionage campaign that compromised the U.S. Veterans of Foreign Wars website. The attackers, experts said, were targeting the computers of active military personnel who visit the site seeking benefits information.
FireEye said a Flash exploit was used via an iFrame to trigger the use-after-free vulnerability in the browser. Compromised computers were hit with a remote access Trojan that stole data; experts speculate the attackers were hoping to steal military secrets from the active service members who use the site as a resource.
It was soon discovered that a second and unrelated group of attackers was also exploiting the IE 10 zero day, this time to impersonate a number of French aerospace companies, redirecting legitimate traffic to the hacker-controlled domains.
Researchers at Seculert said malware that changes hosts files on infected machines in order to add in these malicious domains had previously been the domain of pharming attacks used for fraud.
“This is the first time we have seen a malware change a host file for a purpose other than fraud perpetuated by pharming or for disabling access to specific websites,” Seculert CTO Aviv Raff said.
Microsoft had shipped a Fix-It mitigation for the zero-day as a stopgap until a patch was ready. Microsoft said IE 9 also contains the same vulnerability, but it was not being exploited. IE 11 users running the Enhanced Mitigation Experience Toolkit (EMET) were also protected against these attacks.
The IE update is one of two critical bulletins expected next week. The other is also a remote code execution vulnerability in Windows.
All five bulletins announced by Microsoft today affect versions of Windows or IE all the way back to Windows XP, which Microsoft will no longer support with security updates as of April 8.
“Windows XP is affected by all five updates and there is really no reason to expect this picture to change: Windows XP will continue to be impacted by the majority of vulnerabilities found in the Windows ecosystem, but you will not be able to address the issues anymore,” said Qualys CTO Wolfgang Kandek. “You need a strategy for the XP machines remaining in your infrastructure. We are still seeing a significant number of XP machines in our scans.”
The remaining three bulletins were rated “important” by Microsoft and include an elevation of privilege vulnerability and a security feature bypass issue in Windows, and another security feature bypass issue in Silverlight.
“Of the remaining issues, one is an important privilege issue, probably going to be a kernel or kernel driver patch; never something to ignore but less important than a critical/remote issue,” said Ross Barrett, senior manager of security engineering at Rapid 7. “The other two are the seldom seen ‘security mechanism bypasses’, probably the same issue being patched in Windows and in Silverlight. We will have to wait and see how exploitable this turns out to be. If it turns out that some of these issues are in the wild and under exploitation, then that will change the circumstances of what to prioritize.”
Silverlight, meanwhile, has relatively limited adoption and given Microsoft’s support of Flash in IE 11, it’s not out of the question it will be discontinued eventually, said Tyler Reguly, manager of security research at Tripwire.
“In a world filled with so many web technologies, vendors could better serve the public by simply limiting choice and removing dead weight,” Reguly said.
Alarm bells went off last August when spikes in Tor client downloads were traced to a large click-fraud and Bitcoin-mining botnet called Sefnit.
The malware was using the popular anonymity network to communicate with hackers in order to transmit stolen data and receive additional commands. In Sefnit’s case, the 600 percent increase in Tor usage it kicked off was also its downfall as Tor administrators noticed performance issues and steps were taken to strangle its activity.
Hackers’ use of Tor and other Darknet services is really nothing new, but incidents such as the ensuing Sefnit takedown, as well as the disruption of the Silk Road underground drug and malware market that also operated over Tor, shed more light on the practice.
For example, researchers at Kaspersky Lab have published research uncovering three different campaigns that use Tor as host infrastructure for criminal malware activity: a 64-bit version of the Zeus Trojan that sends traffic through Tor and creates Tor hidden services to obscure the hackers’ location; Chewbacca, a Trojan that, like RAM scrapers, steals data from memory and communicates over Tor; and most recently an Android Trojan that uses a .onion domain as command and control infrastructure.
Researcher Sergey Lozhkin, a senior researcher with Kaspersky Lab, said his work investigating criminals’ use of darknets turned up 900 Tor hidden services and 5,500 nodes.
“The possibility of creating an anonymous and abuse-free underground forum, market or malware C&C server is attracting more and more criminals to the Tor network,” Lozhkin said. “Hosting C&C servers in Tor makes them harder to identify, blacklist or eliminate.”
Lozhkin said Tor underground markets aren’t set up much differently than legitimate ecommerce sites; most include some sort of registration process, offer buyers ratings on traders, and familiar interfaces through which purchases are made. Criminals are selling everything from money laundering services, credit cards, skimmers, carding equipment and more. And most of it is sold using Bitcoin.
Yesterday, Microsoft published new details on Sefnit’s Tor components and configuration data, the domains it was in contact with and how it communicates over Tor.
After the August spike in Tor traffic alerted experts, Microsoft took steps to stop the botnet that were finally realized last Oct. 27 when it modified signatures sent through its update services that removed the outdated Tor client service installed by the malware. The Tor client service had a specific configuration that Microsoft identified, and despite some concerns that Microsoft was overstepping by possibly snaring some versions of Tor legitimately installed by users, the cleanup moved forward and Sefnit numbers dwindled.
The version installed with Sefnit was v0.2.3.25 and it did not automatically update, Microsoft said, leaving users exposed to a number of exploitable vulnerabilities. The Tor client was added as a Windows service on every computer infected by Sefnit and was configured to accept connections over ports 9050 and 9051; 9051 was used by Sefnit to obtain status information regarding its connection to Tor, while 9050 served as the communication point for the client’s SOCKS proxy, through which any application configured to use the proxy can communicate over Tor. Sefnit, Microsoft said, used this port to contact its command servers and bypass intrusion detection systems, and utilized Tor hidden services to obfuscate server locations.
The malware comes with a list of .onion domains that are drop points for stolen data. Microsoft said the list of C&C servers was found in a file inside a cryptographically generated random directory. Within that directory is a file with a .ct extension that contains the victim’s IP address, a string that is likely a victim ID, a list of command and control domains, and a working directory of the malware, Microsoft said.
Microsoft said that at its peak in August 2013 there were an estimated four million Sefnit clients receiving commands; that number had dipped significantly by the end of December, leaving two million that could still be at risk of attack because the Tor services Sefnit added are outdated, Microsoft said.
Milan-based Hacking Team relies on servers in the United States and hosted by American companies to support its clients’ state-sponsored surveillance operations in some of the world’s most repressive regimes.
Hacking Team is an Italian security firm that develops surveillance equipment and sells it to foreign governments that allegedly turn around and use that equipment to spy on various targets. According to a new report from the University of Toronto’s Citizen Lab, in at least 12 cases, U.S.-based data centers contain servers that have some nexus in the infrastructure of foreign espionage.
The specific tool sold by Hacking Team is known as Remote Control System (RCS). According to the report, RCS has the capacity to spy on Skype conversations, email communications, and instant messaging services in addition to siphoning off passwords and local computer files.
Computers with RCS installed on them transmit surveillance data back to their controllers through a series of servers in third-party countries. Known as a proxy chain or circuit, the method – a type of data obfuscation – is commonly used to keep infected users and security experts from being able to accurately examine the sources of traffic. It is not altogether clear whether the proxy servers are established by Hacking Team or its government clients.
“For example,” the report says, “an infected target may discover that his computer is sending information to a server in Fremont, California, but would not be able to trace the ultimate destination of this information to Uzbekistan.”
In certain cases, the servers facilitating RCS masqueraded as ABC News, a small Oregon-based newspaper, and a New York financial services firm. The purpose of this was – of course – to further shield Hacking Team’s clients from exposure. Citizen Lab believes that these particular companies were chosen because the targets of RCS had some level of familiarity with them.
Obvious legal issues emerge when a foreign government is passing its surveillance traffic through servers in third-party countries. Citizen Lab suggests in its report that Hacking Team or its clients may choose the countries through which RCS traffic passes based on their laws. However, it’s also entirely possible that such actions represent a direct violation of the law in certain countries.
“The passage of RCS traffic through the US is not normal routing incident to benign electronic communications, but the purposeful use of US servers for the surreptitious transmission of wiretapped data to foreign governments. Whether moving such information through US-based communications facilities violates US law, including the Computer Fraud and Abuse Act and the Wiretap Act, deserves immediate attention.”
There is no evidence to suggest that the countries engaged in spying are seeking the approval of the countries in which their infrastructure is partly hosted.
Citizen Lab is claiming that the surveillance apparatuses of Azerbaijan, Colombia, Ethiopia, Korea, Mexico, Morocco, Poland, Thailand, Uzbekistan, and the United Arab Emirates are enabled by companies located in a long list of American cities including Boston, Los Angeles, and Chicago. The hosting providers named in the report include GoDaddy, Linode, Sharktech, ColoCrossing, Endurance International Group, Internetserver, InMotion Hosting, NOC4 Hosts, HostDime, and OC3 Networks.
The U.S. contains the largest number of servers supporting RCS command and control communications, though it is by no means the only third-party country through which RCS data is being routed. After the U.S., which has 114 RCS-related servers, Italy, the United Kingdom, Seychelles, and Japan are the largest hosters of Hacking Team servers.
There’s a serious security flaw in some of Cisco’s wireless routers that could allow a remote attacker to take complete control of the router. The bug is in a number of the Cisco small business routers, as well as a wireless VPN firewall.
Cisco has released patches to fix the vulnerability in its Wireless-N VPN Router family and the Wireless-N VPN Firewall. The company said that the flaw is due to a problem in the way that the routers handle authentication requests.
“A vulnerability in the web management interface of the Cisco RV110W Wireless-N VPN Firewall, the Cisco RV215W Wireless-N VPN Router, and the Cisco CVR100W Wireless-N VPN Router could allow an unauthenticated, remote attacker to gain administrative-level access to the web management interface of the affected device,” the Cisco advisory says.
“The vulnerability is due to improper handling of authentication requests by the web framework. An attacker could exploit this vulnerability by intercepting, modifying and resubmitting an authentication request. Successful exploitation of this vulnerability could give an attacker administrative-level access to the web-based administration interface on the affected device.”
Cisco says that it isn’t aware of any public exploits for this vulnerability or any attack attempts against it yet.
Social networking site Meetup.com is finally back online today, yet officials at the site are warning it could still face future outages following a series of sustained distributed denial of service (DDoS) attacks over the weekend.
Meetup is a social networking portal that allows individuals with common interests to converse and convene. The 12-year-old site boasts nearly 16 million users who host and attend almost 316,000 meetups a month.
According to a blog post on Monday by Scott Heiferman, the site’s co-founder and CEO, Meetup.com’s “tough weekend” started a day early on Thursday last week when the first of what would eventually be three DDoS attacks crippled its servers. The site remained offline for about 24 hours on Friday before going down again on Saturday at 4 p.m. Thanks to some security changes the Meetup.com team implemented in the meantime, the company was able to get the site back up by midnight Eastern Standard Time that night, making the service’s apps and site available to users.
After a relatively calm Sunday the third DDoS attack hit at 8:09 EST that night. With site engineers working feverishly to restore the site’s elements, Meetup popped back online on Monday at 4:30 p.m. EST.
According to Heiferman, the attack was apparently preceded by an email suggesting the DDoS attack could have been avoided had Meetup.com paid $300:
Date: Thu, Feb 27, 2014 at 10:26 AM
Subject: DDoS attack, warning
A competitor asked me to perform a DDoS attack on your website. I can stop the attack for $300 USD. Let me know if you are interested in my offer.
Heiferman claims the company agreed not to pay, stressing that Meetup didn’t want to negotiate with criminals or set a nasty precedent. The attack started almost immediately after.
“Payment could make us (and all well-meaning organizations like us) a target for further extortion demands as word spreads in the criminal world,” Heiferman wrote Monday, adding that if the attackers were simply low-balling them on the $300 the criminals might have just taken their money and then simply demanded more.
While experts agree that Meetup.com’s decision was ethical, some believe the site could have benefited from an established cyber-attack defense plan.
“Long before the demand for cash was made, attackers were likely probing the Meetup service, searching for vulnerabilities and preparing to launch an attack that would do the most harm,” said Ashley Stephenson, the CEO of Corero Network Security, a firm that specializes in DDoS prevention, on Tuesday.
A FAQ posted yesterday by Meetup about the outage assures users that none of their information was accessed or stolen; the DDoS just made getting onto the site tricky, which in turn made it difficult for the site to do its job: letting Meetup groups actually meet up.
To repay its users Meetup.com is crediting all users classified as Organizers with seven extra days. Organizers are basically members of Meetup.com who pay a fee to use the service to find like-minded individuals and set up Meetups. The FAQ adds that any Organizers who were supposed to renew their accounts over the weekend but couldn’t as the site was down have had their renewal periods extended as well.
While Heiferman claims the company will continue to not pay the hackers, he does promise the site will be “stable and reliable soon.”
Meetup claimed Tuesday it was still working to restore user functionality and working through the email queue.
“It’ll take time,” the group tweeted Tuesday.
The similarities between the GnuTLS bug and Apple’s goto fail bug begin and end at their respective failure to verify TLS and SSL certificates. Otherwise, they’re neither siblings, nor distant cousins.
The GnuTLS bug is very different, though like Apple’s infamous goto fail error, it will also treat bogus digital certificates as valid.
“It allows someone to impersonate a trusted website, which as far as TLS/SSL goes is the attacker’s Holy Grail,” said cryptographer Matthew Green of Johns Hopkins University. “This one was more of a dumb coding mistake, whereas Apple could have been a cut-and-paste error. It looks like [GnuTLS] failed to cast a return variable correctly. C is hard.”
In both cases, an attacker with man-in-the-middle positioning can intercept traffic and introduce an invalid digital certificate, which because of shoddy coding errors in both software packages, will be checked off as legitimate.
While the goto command appears in the buggy code of both vulnerabilities, the GnuTLS bug veers off in a different direction. Goto, for example, is a standard C paradigm for error handling, and in goto fail it was being used correctly, said Melissa Elliott, a security researcher with Veracode. The problem, she said, is related to variable typing and an improper mixing of error codes that led to this mess.
Elliott said the faulty code snippet in question is supposed to return either true or false depending on whether the certificate is valid; this paradigm is called a Boolean return code. The GnuTLS functions in question, however, return specific error codes identified by negative numbers, each signifying something different, she said.
“The mistake was that when one of these functions returned an error, it would be treated as though it were Boolean without changing the actual number. Under Boolean rules, anything that is not a zero is ‘true,’” Elliott said. “Hence, an error meant to indicate failure would be passed up the chain as ‘true’ (no error) instead of ‘false’ (error).”
The GnuTLS bug was uncovered during a recent audit requested by Linux distributor Red Hat. GnuTLS is the SSL and TLS implementation used by hundreds of software packages, including many flavors of Red Hat Enterprise Linux, and all Debian and Ubuntu distributions. Core crypto and mail libraries such as libcrypt and libmailutils, and cURL are affected. GnuTLS is not as widely implemented as OpenSSL, nor is it deployed on a mainstream iOS device, for example, but it is well deployed in Linux and open source applications.
Elliott said the GnuTLS bug is exploitable in ways other than a man-in-the-middle attack, as is the case with Apple’s goto fail bug.
“For example, if you had someone else’s certificate stored on your personal computer, and a program tried to check that it was valid with reference to the locally trusted CAs (certificate authorities), it could receive the wrong answer,” Elliott said.
Johns Hopkins’ Green said insufficient code review and testing likely allowed the GnuTLS bug to slip through.
“This stuff is hard. Clearly people need to run their TLS implementations through test harnesses and tools that may not exist yet,” Green said, adding that decent TLS code scanners are lacking.
“It is distressingly easy to accidentally write a bug like this. It does not cause anything to crash. Full-featured C compilers can warn you about this bug, but the false positive rate (that is, instances where it can’t possibly do any harm) is high enough that most programmers are inclined to ignore them,” Elliott said. “Unfortunately, this is security-sensitive code, so the consequences of missing the one important warning in a list of benign ones can be catastrophic.”
GnuTLS issued an advisory confirming the vulnerability and urging users to upgrade to the latest GnuTLS version, 3.2.12 or 3.1.22, or to apply a patch for GnuTLS 2.12.x.
Exploits bypassing Microsoft’s Enhanced Mitigation Experience Toolkit, or EMET, are quickly becoming a parlor game for security researchers. With increasing frequency, white hats are poking holes in EMET, and to its credit, Microsoft has been quick to not only address those issues but challenge and reward researchers who successfully submit bypasses to its bounty program.
The tide may be turning, however, if the latest Internet Explorer zero day is any indication. An exploit used as part of the Operation SnowMan espionage campaign against U.S. military targets contained a feature that checked whether an EMET library was running on the compromised host, and if so, the attack would not execute.
That’s not the same as an in-the-wild exploit for EMET, but that may not be too far down the road, especially when you take into consideration two important factors: Microsoft continues to market EMET as an effective and temporary zero-day mitigation until a patch is released; and the impending end-of-life of Windows XP on April 8 could spark a surge in EMET installations as a stopgap.
In the meantime, the EMET bypasses keep on coming. The latest targeted a couple of mitigations in the EMET 5.0 Technical Preview released last week during RSA Conference 2014. Researchers at Exodus Intelligence refused to share much in the way of details on the exploit, preferring to offer it to its customers before making it available for public consumption. A tweet from cofounder and vice president of operations Peter Vreugdenhil said: “EMET 5 bypassed with 20 ROP gadgets. ntdll only, esp points to heap containing fake stack, no other regs required. Adding to our feed soon.”
Vreugdenhil is a fan of EMET, and is in the camp that believes hackers will be adding EMET bypasses to exploits within a year or two, despite the EMET check in the Operation SnowMan exploit, which he believes was added in order to keep the campaign from being detected for as long as possible.
“I think most of the reason is that the return on investment for the bad guys is really not that high at this point,” Vreugdenhil said. “That also means that by the time everybody actually uses [EMET] and the more ground it gains, the more likely it becomes that return on investment for the bad guys will be high enough for them to add it to their exploits.”
EMET provides users with a dozen mitigations against memory-based exploits, including ASLR, DEP, Export Address Table Filtering, Heapspray Allocation, and five return-oriented programming mitigations. ROP chains are the most effective bypass technique in use today, one that Vreugdenhil has used on a couple of occasions against EMET.
Writing exploits targeting EMET, he said, is a little more involved than targeting a vulnerability in third-party software such as Flash or Java. Vreugdenhil said he generally starts with a publicly available exploit, such as the latest IE 10 zero day, and observes the crash the bug causes in order to understand how it corrupts memory and, ideally, discloses memory that can be used to build a ROP chain. Microsoft’s addition of Data Execution Prevention and ASLR in Windows Vista and Windows 7 prevents attackers from executing code at a predictable memory location because module load addresses are now randomized.
“Back in Windows XP when there was no ASLR and no randomization of the modules, it was relatively easy. You would just pick a module and then reuse the code inside that module to still get code execution,” Vreugdenhil said. “Windows 7 came out and put the bar higher by shuffling the modules around, so theoretically, you didn’t know where your modules were in the process. It theoretically should be impossible to point at an address and say ‘Hey would you execute code at that address because I know there’s something going to be there.’”
If an attacker can force a process to leak memory contents back to the exploit, the attacker can reuse that information to bypass ASLR and DEP because he will know where the modules are located, Vreugdenhil said. From there, an attacker needs to identify any additional memory protections in place and address those in order to control the underlying system.
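The arithmetic behind such a memory disclosure is simple, as this hypothetical sketch shows (function names and offsets are ours, purely for illustration): once the runtime address of any known function in a module leaks, the module's randomized base falls out by subtracting the function's fixed offset, and every other address in the module follows by addition.

```c
#include <stdint.h>

/* Illustration only, not exploit code: ASLR randomizes a module's base
   address, but offsets *within* the module are fixed. A single leaked
   function address therefore reveals the base. */
uintptr_t module_base_from_leak(uintptr_t leaked_func_addr,
                                uintptr_t func_offset_in_module) {
    return leaked_func_addr - func_offset_in_module;
}

/* Any other fixed offset (e.g., a ROP gadget) is then recoverable. */
uintptr_t addr_from_offset(uintptr_t module_base, uintptr_t fixed_offset) {
    return module_base + fixed_offset;
}
```

This is why a single information-leak primitive is usually enough to undo the randomization that ASLR provides.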
“In the case of EMET, there’s a long list of protection mechanisms it adds, there’s only two or three that could be a hindrance if you’re writing a client-side IE exploit. And so it’s usually just a matter of figuring out what they are and coming up with ways to sidestep them,” Vreugdenhil said. “If we can do it, we assume there’s many more people who can do it, and it’s also going to be used by the bad guys anywhere between now and a year or two years.”
GnuTLS, an open source SSL and TLS implementation used in hundreds of software packages including Red Hat desktop and server products and all Debian and Ubuntu Linux distributions, is the latest crypto package to improperly verify digital certificates as authentic. The vulnerability, discovered and reported yesterday by engineers at Red Hat, puts any site or application dependent on GnuTLS at risk for exploit.
“It was discovered that GnuTLS did not correctly handle certain errors that could occur during the verification of an X.509 certificate, causing it to incorrectly report a successful verification,” Red Hat said in an advisory issued Monday. “An attacker could use this flaw to create a specially crafted certificate that could be accepted by GnuTLS as valid for a site chosen by the attacker.”
The vulnerability has eerie similarities to a bug reported by Apple in its iOS mobile operating system and OS X for Mac computers. Now known as the goto fail bug, the flaw caused SSL certificate signature checks to be skipped entirely; Apple issued separate patches for each operating system.
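The Apple flaw's mechanism is commonly illustrated with a duplicated goto; the following is a simplified, self-contained model (not Apple's actual source) of how a stray, unconditional jump skips the signature check while leaving the "success" error code in place.

```c
/* Simplified model of the "goto fail" pattern: the second goto is not
   governed by the if statement, so it always executes, the signature
   check below it is dead code, and err is still 0 ("success"). */
static int verify_signature(int hash_update_err, int signature_valid) {
    int err = hash_update_err;
    if (err != 0)
        goto fail;
        goto fail;  /* stray duplicate: unconditionally taken */
    err = signature_valid ? 0 : -1;  /* never reached */
fail:
    return err;
}
```

The indentation makes the second goto look conditional, which is part of why the bug survived review.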
“This really is as bad as it gets,” said Kenneth White, a security expert and principal scientist at Social & Scientific Systems in North Carolina. “An attacker can trivially forge any arbitrary domain and make it appear authoritative and trusted to the requestor. So, not only interception of sensitive channels, but [also] potentially subverting the trusted package signature process as well.”
White estimates there are more than 350 packages that rely on GnuTLS crypto libraries; in addition to popular Linux distributions, core crypto and mail libraries such as libcrypt and libmailutils, and cURL are affected.
“cURL (libcurl3-gnutls), in turn is used by the package updating system both for OpenPGP (gnupg2 and gnupg-curltransport), as well as the system package updater itself (apt-transport-https),” White said. “But what is especially difficult, is understanding the myriad downstream dependencies, such as XML parsers, etc. In general, Debian & Ubuntu have eschewed OpenSSL for license reasons, so there actually exist Nginx and Apache installs that use gnutls as well.”
GnuTLS issued an advisory confirming the vulnerability and noting that it was discovered during an audit of GnuTLS for Red Hat. It urges users to upgrade to the latest GnuTLS version, 3.2.12 or 3.1.22, or to apply a patch for GnuTLS 2.12.x. Red Hat Enterprise Linux Desktop, HPC Node, Server and Workstation v6, as well as Red Hat Enterprise Linux Server AUS and EUS v6.5, are affected, Red Hat said in its advisory.
The recent Apple bug brought this issue to the forefront. Apple released a patch on Feb. 21 for iOS and days later for OS X. An attacker with man-in-the-middle positioning on a network could present an invalid certificate that would pass checks normally designed to reject such a cert. The attacker would then be able to monitor communication and network traffic thought to be secure.
Critical infrastructure policymakers are advocating the creation of a new entity, the Institute for Electric Grid Cybersecurity, along with a new set of guidelines, to better protect the North American electric grid from cyber-attacks and to determine how to respond if the grid is ever compromised.
The initiative was described in a new report (.PDF) issued by the Bipartisan Policy Center. The report was authored by a handful of officials from across the industry, including former National Security Agency and C.I.A. Director Gen. Michael Hayden.
Hayden appeared on a panel last Friday to discuss the paper at the Bipartisan Policy Center in Washington, D.C. where the rest of the report’s authors discussed their recommendations. The group is largely encouraging government agencies and private entities to strengthen the system that’s in place before it’s inevitably attacked.
Describing cyberspace as a domain that favors the attacker, Hayden called the threats “almost self-evident,” before going on to reference the adversaries who want to “degrade, disrupt, deny, destroy” networks and the hackers out there responsible for “recreational espionage.”
Hayden was joined by Curt Hébert, a partner at Mississippi law firm Brunini, Grantham, Grower & Hewes, Paul Stockton, a Managing Director at economic advisory firm Sonecon, and Scott Aaronson, the Senior Director of National Security Policy for the Edison Electric Institute.
Throughout Friday’s panel, the men made several references to the staggering $6 billion cost attributed to the Northeast blackout of August 2003. While that blackout was ultimately blamed on an errant tree branch, the concept of a multiday outage is a spectre that still looms over the electrical grid.
Hébert, who formerly chaired the Federal Energy Regulatory Commission, at one point called electricity “the most critical of the infrastructures,” mentioning the blackout and the difficulties associated with restoring sectors like telecom and healthcare. Hébert insisted that the industry has to do a better job understanding the need for mandatory standards, adding that while NERC has done a good job working with the federal regulatory commission, there are still risks that need to be mitigated.
Hébert described the new organization, claiming it would need to be independent and tackle security from a holistic angle to ensure that everything “from the burner tip all the way down to the point that the kilowatt is actually given to the consumer” is protected.
The industry organization, tentatively titled the Institute for Electric Grid Cybersecurity, is only mentioned twice in the 76-page report, but the group claims that it could be loosely modeled on the Institute of Nuclear Power Operations, a group started in the wake of the accident at Three Mile Island in 1979, and would involve “power sector participants” from across North America.
Those participants would ideally include local distribution utilities – there are 3,200 nationwide that deliver electricity – as well as large generators and state utility regulators.
Still, Hayden acknowledged that for change to come, everyone would have to assume responsibility, most importantly the government.
“This cannot be done with just good will and executive action; it’s going to require Congress to actually face these issues and make some decisions that provide some legislative structure in terms of protection and responsibility that makes this more possible than it is today,” Hayden said.
The case for congressional action is clearly laid out in the report with one part recommending the Department of Energy allocate funds “to fully evaluate and understand systemic cyber risks” and “help regulators better evaluate the potential impacts of cyber attacks and provide needed context for weighing the benefits of utility investments in cybersecurity.”
“What permeates the report is that you can’t win this just defending the perimeter, you can’t win this with just prevention and defense,” Hayden said.
“It’s the concept of resilience, what happens after things start to go wrong?”
As Matthew Wald, an energy reporter with the New York Times who moderated Friday’s panel, reminded the audience, that’s exactly what’s happening.
Things are going wrong.
Wald noted in the panel’s introduction that of the more than 250 incidents reported to the Department of Homeland Security last year, two-thirds targeted the energy sector and grid.
One of the bigger problems with the grid surfaced last year when two engineers, Adam Crain and Chris Sistrunk, discovered a vulnerability in an electrical communication protocol that’s widely used across the country. That vulnerability opened the floodgates and later led to a 20-page report “replete with vulnerabilities in 16 different system vendors.” According to a New York Times article from an October briefing on the vulnerabilities, they affect a number of supervisory control and data acquisition (SCADA) systems, and if exploited at a single, unmanned power substation, they could result in “a widespread power outage.”
A team of researchers has published a paper that explains a number of attacks against websites and Web-based applications running TLS. The researchers’ techniques do not exploit implementation errors, the most common attack vector against encryption securing online communication; instead, they focus on exploiting features of the protocol, including session resumption followed by client authentication during session renegotiation.
The paper, called “Triple Handshakes and Cookie Cutters: Breaking and Fixing Authentication over TLS,” describes in detail how an attacker can use a man-in-the-middle attack to successfully impersonate a TLS client in attacks against TLS renegotiations, wireless networks, challenge-response protocols and channel-bound cookies.
Written by Karthikeyan Bhargavan, Antoine Delignat-Lavaud and Alfredo Pironti of the Prosecco research team at INRIA Paris-Rocquencourt, Cédric Fournet at Microsoft Research, Cambridge, and Pierre-Yves Strub of the IMDEA Software Institute, the paper demonstrates how an attacker could force a client running TLS to connect to an attacker-controlled server with an authenticated credential. The attacker’s server will then be able to impersonate the client at another server accepting the same credential, via single sign-on, for example.
“Concretely, the malicious server performs a man-in-the-middle attack on three successive handshakes between the honest client and server, and succeeds in impersonating the client on the third handshake,” the researchers wrote.
The researchers said their attacks work against leading browsers, VPN applications, and HTTPS libraries; variants of the attacks that do not rely on renegotiation can, for example, enable spoofing of other TLS-based authentication mechanisms such as PEAP, SASL and Channel ID.
“Our attacks exploit a lack of cross-connection binding when TLS sessions are resumed on new connections,” the researchers wrote. “Moreover, our attacks do not require an active network adversary but can be mounted only with a malicious server or website.”
The researchers dug into four TLS weaknesses, starting with a problem in the RSA handshake that enables impersonation via an unknown key-share attack, as well as another weakness in the Diffie-Hellman Exchange handshake where an attacker can use a man-in-the-middle attack between the client and server to steal sessions sharing the same keys, a different take on the same unknown key-share attack.
Session resumption on a new connection exhibits another weak link: it uses an abbreviated handshake that can be forwarded between connections and is accepted because it does not re-authenticate the client and server identities.
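A toy model of that weak link (our sketch, not code from the paper): the full handshake authenticates the peer and caches a session, while the abbreviated resumption path matches only the cached session identifier and never re-checks who is on the other end.

```c
#include <string.h>

/* Toy model of the resumption weakness: peer identity is recorded at
   full-handshake time but never re-verified on resumption. */
struct session {
    int  id;
    char peer[32];
};

/* Full handshake: authenticate the peer and cache the session. */
struct session full_handshake(int id, const char *peer) {
    struct session s;
    s.id = id;
    strncpy(s.peer, peer, sizeof s.peer - 1);
    s.peer[sizeof s.peer - 1] = '\0';
    return s;
}

/* Abbreviated handshake: accepts on session id alone -- the bug class.
   A session established with one peer resumes against another. */
int resume(const struct session *s, int id, const char *claimed_peer) {
    (void)claimed_peer;     /* identity is never re-checked */
    return s->id == id;     /* match on the cached session only */
}
```

In the real protocol the matching is cryptographic (session ticket and master secret), but the structural gap is the same: resumption carries no fresh binding to the peer's identity.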
The fourth TLS issue occurs during renegotiation, the researchers said, when the server and client certificates can change; applications are not properly instructed how to handle such changes and may not enforce the appropriate certificate for the situation.
The researchers disclosed the vulnerabilities to a number of vendors, including the major browser vendors Apple, Google, Microsoft and Mozilla, all of which implemented a patch or some mitigation. OpenSSL, GnuTLS and GNU SASL said mitigations are pending.
Google has fixed 19 security flaws in its Chrome browser, including more than a dozen high-risk bugs. The company paid out $3,500 in rewards to security researchers who reported flaws.
Two of the high-risk vulnerabilities fixed in Chrome 33 are use-after-free flaws, one in SVG images and the other in speech recognition. There’s also a heap buffer overflow in the software rendering. The full list of flaws that earned rewards from Google:
[$1000] High CVE-2013-6663: Use-after-free in svg images. Credit to Atte Kettunen of OUSPG.
[$500] High CVE-2013-6664: Use-after-free in speech recognition. Credit to Khalil Zhani.
[$2000] High CVE-2013-6665: Heap buffer overflow in software rendering. Credit to cloudfuzzer.
Medium CVE-2013-6666: Chrome allows requests in flash header request. Credit to netfuzzerr.
In addition to the bugs found by external researchers, Google’s internal security team also found a large number of bugs that were fixed in this release. Google’s researchers found 11 high-risk bugs and four medium-risk vulnerabilities.
Verizon updated its transparency report yesterday, breaking down National Security Letter and Foreign Intelligence Surveillance Act (FISA) orders for the first and second halves of 2013.
The telecommunications giant released its first transparency report in late January, responding to pressure from privacy advocates to publish law enforcement and government requests for data in the wake of the Snowden leaks. AT&T has also published its first transparency report, though both AT&T and Verizon lag behind Internet companies such as Google, Facebook and Twitter that have been sharing data for more than a year.
Verizon’s update doesn’t radically stray from the numbers it shared in January; during the first six months of 2013, the telecom received between 0 and 999 National Security Letters and between 0 and 999 FISA orders for content and customer information. The same ranges held for the period between July 1 and Dec. 31, the company said.
This is the first time Verizon reported on FISA orders since a Department of Justice ruling eased a gag order on companies that prevented publishing of such data. The ruling was a concession after months of lobbying and lawsuits from Internet companies seeking greater transparency and hoping to dispel the notion that they were complicit in government snooping on users by providing intelligence agencies and law enforcement direct access to company servers.
Now companies are allowed two reporting options, in ranges of 0-999 or specific numbers up to 250 requests and then in ranges of 250 thereafter. The DOJ also requires a six-month quiet period on FISA order requests.
“We welcome greater transparency in this area by telecommunications and internet companies, in the absence of broader information by the government collecting the data,” said Verizon general counsel Randal Milch. “We once again call on all governments to make public the number of demands they make for customer data from such companies, because that is the only way to provide the public with an accurate data set.”
Verizon also reported that between 4,000 and 4,999 customer selectors were targeted by National Security Letters and FISA orders for content and user information. Selectors, Verizon said, are customer identifiers used by the company; the identifier is generally a phone number.
“The number of selectors is generally greater than the number of customer accounts,” Verizon said in a statement. “An NSL might ask for the names associated with two different telephone numbers; even if both phone numbers were assigned to the same customer account, we would count them as two selectors.”
Verizon reported on Jan. 22 that it also received 36,000 warrants requesting location information or stored content data from its landline, Internet and wireless services; the court-ordered location data requests, Verizon said, are growing in frequency every year.
“Verizon only produces location information in response to a warrant or order; we do not produce location information in response to a subpoena,” Verizon said in a statement at the time. Of those requests, 24,000 came through court orders and the rest through warrants. Verizon also received about 3,200 warrants or court orders for “cell tower dumps,” in which it was compelled to identify the phone numbers of all phones that connected to a specific cell tower during a given period of time.
As seemingly every new gadget and electronic device comes fitted with an Internet connection these days – appliances, cars and medical devices being a few chief examples – the floodgates have opened ever wider to an alarming number of new attack vectors.
The burgeoning “Internet of Things” (IoT), as the concept has become colloquially known over the last few years, has prompted Cisco Systems to issue a challenge to programmers to address these security issues before they become bigger problems.
In what it has dubbed the Internet of Things Security Grand Challenge, the company is offering up to $300,000 in prize money to members of the global security community who propose the best practical security solutions “across the markets being impacted daily by the IoT.”
Cisco Security Group Senior VP Chris Young explained the contest on the company’s main blog, writing that $50,000 to $75,000 will be awarded to up to six recipients. According to the challenge’s site, the deadline for submissions is June 17, and the winners will be announced at the company’s second annual Internet of Things World Forum in Barcelona, Spain, later this year.
Young notes that proposals will be based on four criteria:
- Feasibility, scalability, performance and ease-of-use
- Applicability to address multiple IoT verticals (manufacturing, mass transportation, healthcare, oil and gas, smart grid, etc.)
- Technical maturity/viability of proposed approach
- Proposers’ expertise and ability to feasibly create a successful outcome
“As our connected lives grow and become even richer, the need for a new security model becomes even more critical,” Young wrote.
The security shortcomings of cars and medical devices have been made clear over the past several years. In 2013, researchers Chris Valasek and Charlie Miller published a thorough paper describing how they were able to hack certain Ford and Toyota cars to control the steering, braking and other functions while the vehicles were being driven. Meanwhile, the Food and Drug Administration last year urged medical device manufacturers to take security more seriously, handing down a series of suggestions intended to shore up often-vulnerable devices like insulin pumps, pacemakers and defibrillators.