The National Cybersecurity and Critical Infrastructure Protection Act of 2013 would amend the Homeland Security Act of 2002 to better protect the country against potentially destructive cyber attacks targeting national utilities and other critical infrastructure systems.
The House Subcommittee on Cybersecurity, Infrastructure Protection, and Security Technologies has marked up and passed the bill back to the House Committee on Homeland Security. From here, H.R. 3696 will travel to the House floor for debate and an eventual vote. Should it pass in the House, it will proceed to the Senate and eventually the Oval Office.
Outside the Capitol, the American Civil Liberties Union, American Chemistry Council, Boeing Company, and National Defense Industrial Association are among the long list of strange bedfellows expressing support for the pending legislation.
In general, the bill seeks to establish a threat-information-sharing partnership between the Department of Homeland Security and the owners and operators of the nation’s critical infrastructure systems. It also establishes a framework through which the DHS can work with international partners to harden the security of systems outside the U.S. but upon which American citizens depend.
More specifically, the bill calls on the Secretary of Homeland Security to facilitate efforts to fortify and maintain a secure, functioning, and resilient critical infrastructure. Part of that responsibility will be to ensure that the handlers of infrastructure receive actionable, industry-specific cyber threat intelligence in real time.
The bill – should it become law – would also call on the secretary to work with private partners to help develop and allocate funds for voluntary security and resiliency strategies. In the event of an attack, the bill would require the DHS to assist in incident response activities if critical infrastructure companies request such help.
The bill also opens an avenue through which infrastructure handlers can request help from the government in finding and mitigating threats and vulnerabilities. The secretary would also be required to provide more general security educational training to handlers upon request.
Beyond these requirements and the technical minutiae that fill out the rest of the bill’s text, the bill mandates that the DHS educate the broader public on the importance of securing information systems.
“H.R. 3696 strengthens our cyber defenses by bolstering and providing oversight of DHS’s cybersecurity mission, fostering collaborative public-private partnerships, while also ensuring privacy and civil liberties are protected,” the bill’s sponsors wrote. “We are greatly encouraged by the strong bipartisan support of the NCCIP Act, as well as the many endorsements it has received from both industry and privacy advocates, and we look forward to moving this legislation to the House floor.”
To that effect, the ACLU read the bill and gave it its stamp of approval, stating that “information sharing provisions in this bill do not undermine current privacy laws.”
The ACLU endorsed the bill further:
“Unlike H.R. 624, the Cyber Intelligence Sharing and Protection Act (CISPA), your bill does not create broad exceptions to the privacy laws for cybersecurity. Instead, it strengthens private-public partnerships by supporting existing Information Sharing and Analysis Centers and Sector Coordinating Councils and reinforces voluntary sharing under current statutes that already provide for many cybersecurity scenarios.”
In a letter expressing its support for the bill, the Boeing Company noted that it is constantly challenged by cyber attacks that are increasing in both number and sophistication. H.R. 3696, a company spokesperson wrote, will strengthen and focus efforts as the government works in partnership with the private sector to increase defensive capabilities.
You can read the subcommittee’s one-page explainer, broadly outlining the terms and scope of the bill, by clicking the image above.
The code disclosure, said Israeli developer Tal Ater, is in response to Google’s decision not to release a patch for the vulnerability after acknowledging to him that it was a problem.
Ater wrote in a post on his personal website that he reported the issue to Google on Sept. 13, and 11 days later the company informed him that a patch was ready; he soon learned he was also eligible for a $30,000 bounty from the Chromium Reward Panel.
More than a month later, however, Ater said Google had yet to release the patch and told him the issue was mired with the W3C standards organization. The W3C updated its Web Speech API Specification in November, and indications are that the behavior may be in line with the standard.
“The security of our users is a top priority, and this feature was designed with security and privacy in mind,” a Google spokesperson said via email.
In a demo, above, Ater’s exploit begins with a Chrome user engaging with a malicious website using the browser’s speech recognition capabilities. The exploit depends on a user giving the website permission to use the microphone. The site developed for the demo is a to-do list app, and once the user is done interacting with the list, the command is given to shut off the microphone. Chrome’s flashing red dot in the browser tab disappears, leading the user to think speech recognition is off.
But the exploit proves just the opposite is true.
“As long as Chrome is running, nothing that is said next to your computer is private,” the demo said.
The demo continues: the user has closed the site authorized to use speech recognition and has moved on to another website. Nothing indicates that audio is being recorded; the browser, however, is still listening, Ater said.
A hidden pop-under, disguised as a banner advertisement, is then revealed to be capturing a transcript of the audio and sending it to Google, where it is automatically analyzed and sent back to the malicious site, Ater said. In the current version of Chrome, however, Google has fixed the code and now forces pop-under ads to appear on top of the window being viewed.
“What you see here essentially turns Google Chrome into an espionage tool,” the demo said. “It compromises your privacy in your office or your home, even when you’re not using the computer. Anything said within earshot of your computer can be captured by malicious parties.”
Ater said the exploit can be programmed to stay dormant and activate only when certain keywords are said. He also said that while most sites that use speech recognition do so over HTTPS, Chrome will still remember that a user granted the site permission to use the microphone and allow it to start listening once the user visits again. With Ater’s exploit, the indicator light in Chrome will not flash and the user will not know they’re being eavesdropped on.
“When you click the button to start or stop the speech recognition on the site, what you won’t notice is that the site may have also opened another hidden pop-under window. This window can wait until the main site is closed, and then start listening in without asking for permission,” Ater wrote on his site. “This can be done in a window that you never saw, never interacted with, and probably didn’t even know was there. To make matters worse, even if you do notice that window (which can be disguised as a common banner), Chrome does not show any visual indication that speech recognition is turned on in such windows – only in regular Chrome tabs.”
Remote code execution bugs are the gold nuggets of security research. They’re the ones that researchers stay up all night looking for and they’re the kind of vulnerabilities that often are worth big money, whether it’s from a vulnerability broker, a government agency or a bug bounty program. For Reginaldo Silva, when he came across a serious vulnerability in the OpenID module in Drupal, he wasn’t sure right away exactly what he had or how valuable it was, so he reported it and later received a $500 bounty from Google, which uses OpenID. Only later did he realize it might have a much broader impact, and that’s how he ended up with a much, much bigger bounty from Facebook.
Silva, a computer engineer from Brazil, said he was messing around with Drupal back in September 2012 and ended up discovering a problem with the way Drupal handled OpenID. The vulnerability was an XML external entity expansion bug that allowed an attacker to read any file on a filesystem and take some other malicious actions. He reported the bug and it went into the CVE system, but a few days later he started thinking about how widely used OpenID is. He tested some Google properties and found that AppEngine and Blogger were both vulnerable and got a $500 bounty for his trouble.
But Silva kept looking around and remembered that Facebook allowed OpenID logins, but couldn’t find a way to enter an arbitrary OpenID URL, so he figured the site wasn’t vulnerable.
“So for more than a year I thought Facebook was not vulnerable at all, until one day I was testing Facebook’s Forgot your password? functionality and saw a request to https://www.facebook.com/openid/receiver.php,” Silva wrote in a blog post explaining his find.
“That’s when I began to suspect that Facebook was indeed vulnerable to that same XXE I had found out more than a year ago. I had to work a lot to confirm this suspicion, though. Long story short, when you forget your password, one of the ways you can prove to Facebook that you own an @gmail.com account is to log into your Gmail and authorize Facebook to get your basic information (such as email and name). The way this works is you’re actually logging into Facebook using your Gmail account, and this login happens over OpenID. So far, so good, but this is where I got stuck. I knew that, for my bug to work, the OpenID Relying Party (RP – Facebook) has to make a Yadis discovery request to an OpenID Provider (OP) under the attacker’s control. Let’s say http://www.ubercomp.com/. Then my malicious OP will send a response with the rogue XML that will then be parsed by the RP, and the XXE attack will work.”
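The mechanics Silva describes can be sketched in a few lines. The following Python snippet is a hedged illustration – not Drupal’s or Facebook’s actual code, and the file names are invented – showing how an XML external entity pulls a local file into the parsed document once a parser is configured to resolve external entities:

```python
import os
import tempfile
import xml.sax
from io import BytesIO
from xml.sax.handler import feature_external_ges


class TextCollector(xml.sax.ContentHandler):
    """Accumulates character data from the parsed document."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def characters(self, content):
        self.chunks.append(content)


with tempfile.TemporaryDirectory() as workdir:
    # Stand-in for a sensitive server-side file such as /etc/passwd
    secret_path = os.path.join(workdir, "secret.txt")
    with open(secret_path, "w") as f:
        f.write("root:x:0:0::/root:/bin/sh")

    # A malicious discovery-style response: the DOCTYPE defines an
    # external entity pointing at a local file, then references it.
    evil_xml = f"""<?xml version="1.0"?>
<!DOCTYPE root [<!ENTITY xxe SYSTEM "file://{secret_path}">]>
<root>&xxe;</root>""".encode()

    parser = xml.sax.make_parser()
    handler = TextCollector()
    parser.setContentHandler(handler)
    # External general entities are off by default in modern Python;
    # turning them on recreates the vulnerable configuration.
    parser.setFeature(feature_external_ges, True)
    parser.parse(BytesIO(evil_xml))

    leaked = "".join(handler.chunks)

print(leaked)  # the file's contents leak into the parsed document
```

This is why hardened deployments disable external entity resolution outright: with the feature left at its safe default, the same payload fails instead of expanding.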
Silva continued working on the Facebook login and eventually found a way to trigger the bug. But he couldn’t find a way to read any files on the vulnerable system, until he realized he had a small bug in his own code. Once he fixed that, he was in the clear.
“That’s right, the response contained Facebook’s /etc/passwd. Now we were going somewhere. By then I knew I had found the keys to the kingdom. After all, having the ability to read (almost) any file and open arbitrary network connections through the point of view of the Facebook server, and which doesn’t go through any kind of proxy was surely something Facebook wanted to avoid at any cost,” he wrote.
Silva wanted to escalate the bug to a remote code execution vulnerability, though, and kept working on the problem. However, he also wanted to make sure he played by the rules of Facebook’s bug bounty program, so he reported the XXE flaw and asked for permission to continue working on elevating it to an RCE flaw. That initial report sent the Facebook security team into full-on quick-response mode immediately.
“In November, we were reading through incoming bug reports and came across a claim we wanted to investigate right away: arbitrary file reads. The report was well written and included proof of concept code, so we were able to reproduce the issue easily. After running the proof of concept to verify the issue, we filed an urgent task—triggering notifications to our on-call employees,” the Facebook security team explained in its account of the incident.
The team implemented a short-term fix in one line of code immediately and then set about trying to figure out how to push it to all of its Web servers. Once that was handled, the team looked for any associated issues and tried to determine whether there was a better long-term patch.
“After debugging, we concluded that libxml_disable_entity_loader(true) was indeed the correct final fix. Because we want to leave the code in a better state than we found it (rewrite old code, write tests, etc), writing the long term fix is often the step in the lifecycle of a bug that takes the longest. We wanted this line to run before anything else, so we put it in the lowest level of the callstack in our request initialization code,” Facebook said.
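The same fail-closed behavior can be seen outside PHP. Python’s standard-library ElementTree, for example, ships with external entity resolution disabled, so an XXE-style payload is rejected rather than expanded (a minimal sketch, not Facebook’s code):

```python
import xml.etree.ElementTree as ET

# The same shape of payload as an XXE probe: an external entity
# that points at a local file.
payload = """<?xml version="1.0"?>
<!DOCTYPE r [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<r>&xxe;</r>"""

try:
    ET.fromstring(payload)
    outcome = "expanded"
except ET.ParseError:
    # ElementTree refuses to resolve the external entity
    outcome = "rejected"

print(outcome)
```

Facebook’s one-line fix put its PHP stack in the same posture: entity loading off before any request-handling code runs.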
While this was happening, Silva was at lunch thinking about what he would do to escalate the vulnerability to RCE. When he returned, he discovered that Facebook already had implemented its fix.
“Needless to say, I was very impressed and disappointed at the same time, but since I knew just how I would escalate that attack to a Remote Code Execution bug, I decided to tell the security team what I’d do to escalate my access and trust them to be honest when they tested to see if the attack I had in my mind worked or not. I’m glad I did that. After a few back and forth emails, the security team confirmed that my attack was sound and that I had indeed found a RCE affecting their servers,” Silva said.
The Facebook security team realized the severity of the flaw and was considering a major bounty for Silva. They settled on a formula that averaged the recommended bounties from several of the company’s program administrators and came up with the final figure: $33,500. That’s one of the higher bounties paid by any of the major vulnerability reward programs, outside of the special bounties that Google sometimes pays in its Pwnium contest or Microsoft pays for mitigation bypasses.
“Plus, and more importantly, I get to brag I broke into Facebook… Nice, huh?” Silva said.
Suits and Spooks Collision DC 2014 (http://www.suitsandspooks.com/2014/01/dc-2014/) wrapped up this week, and I had the opportunity to speak on two panels at the event: “Exploiting End Points, Devices, and the Internet of Things” and “Is the Cloud and Virtualization an Attackers Dream or Nightmare?”
A new strain of Android malware has been spotted that masquerades as an Android security app but once installed, can steal text messages and intercept phone calls without the device’s owner being any the wiser.
Dubbed Android.HeHe, the malware has six variants according to a blog post yesterday by Hitesh Dharmdasani, a mobile malware researcher with FireEye.
The malware apparently comes disguised as a security update (“Android Security”) for the phone’s operating system and once it’s set in place, it contacts the command-and-control server and conducts surveillance on incoming SMS messages. The command-and-control server responds with a list of phone numbers that “are of interest to the malware author,” according to Dharmdasani. If one of those numbers sends an SMS or makes a call to a compromised device, the malware intercepts it, refrains from sending the device a notification and removes the message from the SMS history.
While text messages are logged and sent to the C&C, phone calls are outright silenced and rejected.
Other information, like the phone’s International Mobile Station Equipment Identity (IMEI) number, its phone number, SMS address and channel ID are also collected, converted into JSON, then a string and sent off to the C&C as well.
Further information, such as the phone’s model, operating system version, and associated network (GSM/CDMA), is sent to the C&C in the same fashion.
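The serialization step described above – device fields packed into JSON and then flattened to a string for transmission to the C&C – is straightforward to picture. A hypothetical reconstruction in Python, with invented field names and values:

```python
import json

# Hypothetical beacon: device identifiers gathered by the malware,
# packed into a JSON object, then serialized to a single string
# before being sent to the command-and-control server.
device_info = {
    "imei": "356938035643809",      # invented IMEI
    "phone_number": "+5511998760000",
    "sms_address": "10086",
    "channel_id": 3,
}

payload = json.dumps(device_info)  # JSON object -> string
print(payload)
```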
While the C&C has since gone offline, FireEye researchers were still able to analyze how the server processed responses.
While FireEye’s blog post goes into the malware much more in depth, including a technical discussion of the malware’s “sandbox-evasion tactic,” it’s further proof that threats against Android – and even more variants of those threats – are continuing to stack up.
A small number of Tor exit relays are misbehaving, conducting man-in-the-middle attacks and monitoring encrypted traffic from users of the anonymity network.
Researchers from Karlstad University in Sweden published a paper this week examining the malicious behavior of some Tor exit relays and found 25 that were either behaving maliciously, or were misconfigured to the point where they would raise a red flag on the network. The nearly two dozen relays in question are a small fraction of the available exit nodes—as many as 1,000 at a given time—that act as a final gateway for a user’s traffic to pass before it hits the open Internet.
The experiment, conducted by Philipp Winter and Stefan Lindskog, began on Sept. 19 and was carried out using a free tool built by the two researchers called exitmap. The tool scans exit relays using a number of modules the pair developed that check for common attacks, including man-in-the-middle, SSH, and DNS attacks, and even the sslstrip attack developed by researcher Moxie Marlinspike.
The scans went on for four months and 25 malicious or misconfigured exit relays were exposed. Most of the relays, the pair’s paper “Spoiled Onions: Exposing Malicious Tor Exit Relays” said, reside in Russia. Most of the attacks were man-in-the-middle attacks where someone tried to inject code into an encrypted traffic stream as it left Tor. Two sslstrip attacks were discovered, while a handful of others blocked traffic to pornography sites or social media sites in areas where censorship of the Internet is tight.
The Russian relays shared the same fingerprint, leading the researchers to conclude that the same person or group was behind them; the fingerprint characteristics include similarities in the self-signed certificates used by the relays and the use of the same root certificate, called “Main Authority.” Most of the IP addresses belonging to those relays were run on the network of a virtual private server provider, the paper said, adding that several were on the same netblock belonging to GlobalTel-Net. The attacks, the paper said, may date back to February 2013.
Those Russian relays, the paper said, also took a great interest in users’ activities on Facebook and designed attacks that tried to tamper with connections to Facebook. The researchers wrote that targeting individuals using Tor is difficult, but less so is the targeting of classes of users based on their destination. The paper made no claim as to the identity of the attackers or what their interest in Facebook activity might be.
The use of a self-signed certificate in these attacks points to a lack of sophistication on the attacker’s part, in that self-signed certs trigger the about:certerror warning page on the Tor browser. Similar to Firefox, on which the Tor browser is built, about:certerror warns a user that the connection is untrusted and forces the user to click through if they wish to continue.
Winter and Lindskog wrote a separate post on the Tor Project blog that put the attacks into perspective, clarifying the risk and pointing out that the number of malicious relays is low.
“Tor clients select relays in their circuits based on the bandwidth they are contributing to the network. Faster relays see more traffic than slower relays which balances the load in the Tor network,” they wrote. “Many of the malicious exit relays contributed relatively little bandwidth to the Tor network which makes them quite unlikely to be chosen as relay in a circuit.”
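That bandwidth-weighted selection amounts to a weighted random choice. In the sketch below (relay names and bandwidth figures are invented for illustration), a low-bandwidth malicious exit is almost never picked:

```python
import random

# Each relay's chance of being chosen is proportional to the
# bandwidth it contributes, so a slow malicious exit rarely ends
# up in a circuit. Figures are invented.
relay_bandwidth_kbps = {
    "fast-exit": 50_000,
    "medium-exit": 5_000,
    "slow-malicious-exit": 50,
}


def pick_exit(relays, rng=random):
    """Pick one exit relay, weighted by contributed bandwidth."""
    names = list(relays)
    weights = [relays[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]


counts = {name: 0 for name in relay_bandwidth_kbps}
for _ in range(10_000):
    counts[pick_exit(relay_bandwidth_kbps)] += 1

print(counts)  # the slow exit accounts for well under 1% of picks
```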
They also point out that some of these same attacks are used on public Wi-Fi networks, for example, and said the bigger issue is what they call the “broken” Certificate Authority system.
“Do you actually know all the ~50 organisations who you implicitly trust when you start your Firefox, Chrome, or TorBrowser?” they said. “Making the CA system more secure is a very challenging task for the entire Internet and not just the Tor network.”
American gas and oil companies have been targeted by a hacking group with ties to the Russian Federation for close to 18 months, a new research report indicates.
The attackers have leveraged watering hole attacks to infect users inside critical infrastructure organizations with a remote access Trojan known as HAVEX. According to Crowdstrike’s 2013 Threat Report, released this morning, the RAT drops malware on compromised machines that sends system information to a command-and-control server, along with credential-harvesting tools that steal passwords from browsers and backdoors that communicate with the hackers’ infrastructure to drop additional payloads. It also uses RSA public key cryptography to encrypt and authenticate the malware files it drops. Attackers generally use low-grade encryption algorithms, said Adam Myers, vice president of intelligence at Crowdstrike.
“It’s well built. The people who had it built had more capable programmers than we’ve typically seen with the Chinese-based adversary,” said Myers. “That was something that piqued our interest when you see a nice clean piece of code like that. The functionality is something that you would typically expect but the leveraging of the RSA encryption algorithm is a lot more complicated than most of the stuff we see. Implementing public key cryptography is fairly unique for these types of attacks.”
Another noteworthy characteristic of the attacks, Myers said, is the fact that the attackers are querying the BIOS of machines inside these organizations.
“We’re not sure if they’re exploiting BIOS, but they are taking note of what BIOS is installed,” he said. “It’s possible they have some capability.”
Myers said that it’s not out of the realm of possibility for an attacker to copy out a machine’s BIOS and replace it with a custom BIOS. Such activity allows an attacker to maintain persistent presence on a computer, even if a hard drive is replaced, for example.
“And if you wanted to brick the machine, there’s no better way than to overwrite the BIOS,” Myers said.
The attacks are not limited to the U.S., Crowdstrike said; government agencies, manufacturing firms, defense contractors, healthcare and technology companies in Europe, the Middle East and Asia have also been targeted.
Crowdstrike said its data supports nation-state sponsorship of this campaign, given the sophistication of the tools, command and control activity, and the build-times of the malware samples and backdoor communication—all of which coincide with Russian working hours, the report said.
“The level and extent to which oil and gas were targeted was another thing to us that made it seem like it was very focused,” Myers said. “When you see that kind of focus in a targeted attack in terms of victimology, that’s something that gets your attention.”
“If you look back even as far as 2006, you see targeted attackers using a lot of Microsoft Office exploits until they exhausted all the low-hanging vulnerabilities in those products and then moved into Adobe and others,” Myers said. “It was the easiest way in; they’re not spending a lot of time looking for vulnerabilities, just using low-hanging stuff like Java to get around ASLR and stringing exploits together to get in. Anything that makes it easier for attackers…that’s why we’re seeing a lot of strategic web compromises.”
After months of public calls from privacy advocates and security experts, Verizon on Wednesday released its first transparency report, revealing that it received more than 164,000 subpoenas and between 1,000 and 2,000 National Security Letters in 2013. The report, which covers Verizon’s landline, Internet and wireless services, shows that the company also received 36,000 warrants, most of which requested location or stored content data.
Large Internet companies such as Google, Twitter, Facebook and Microsoft have been publishing transparency reports for several years now, detailing the volume and types of requests for information that they get from the government and law enforcement. The reports vary from company to company but typically include data on warrants, court orders and some information on NSLs. The government only allows companies to publish the volume of NSLs they receive in ranges of 1,000.
Critics have been pushing for mobile phone providers to publish similar reports, and those calls have grown louder in the months since the Edward Snowden NSA leaks began. Verizon is the second mobile phone provider to publish such a report, after Credo Mobile published its own earlier this month.
The most interesting piece of data in the report may be the fact that Verizon received about 35,000 requests for location information from law enforcement. More than two-thirds of those requests came in the form of court orders. The company said these kinds of requests are becoming more frequent every year.
“Verizon only produces location information in response to a warrant or order; we do not produce location information in response to a subpoena. Last year, we received about 35,000 demands for location data: about 24,000 of those were through orders and about 11,000 through warrants. In addition, we received about 3,200 warrants or court orders for “cell tower dumps” last year. In such instances, the warrant or court order compelled us to identify the phone numbers of all phones that connected to a specific cell tower during a given period of time. The number of warrants and orders for location information are increasing each year,” the report says.
Although some other companies will not produce location information and other sensitive data without a warrant, Verizon says in its report that it will do so “in response to a warrant or order”. The bar for a warrant is higher than it is for a typical court order, which only requires law enforcement to go before a judge. Warrants require a showing of probable cause that the data is somehow related to a crime.
More than half of the 321,545 total requests that Verizon received in 2013 were subpoenas. Unlike some other kinds of requests, the data that companies have to turn over in response to a subpoena does not include content, such as texts or call content, but rather comprises information such as the name and address associated with a number or some transactional data. The company also received a large volume of court orders, more than 70,000. About 10 percent of those orders were pen register or trap and trace orders, which give law enforcement access to call data in real time.
“A pen register order requires us to provide law enforcement with real-time access to phone numbers as they are dialed, while a trap and trace order compels us to provide law enforcement with real-time access to the phone numbers from incoming calls. We do not provide any content in response to pen register or trap and trace orders. We received about 6,300 court orders to assist with pen registers or trap and traces last year, although generally a single order is for both a pen register and trap and trace. Far less frequently, we are required to assist with wiretaps, where law enforcement accesses the content of a communication as it is taking place. We received about 1,500 wiretap orders last year,” Verizon said in its report.
The restrictions that the government places on the way companies can report the number of NSLs they receive make it difficult to compare volumes between companies. However, the range of 1,000–1,999 that Verizon reported is on the higher end of what’s been published by the various companies in their transparency reports recently. Unlike some of the other vendors who have published reports, Verizon detailed what kind of information it provides in response to an NSL.
“The FBI may seek only limited categories of information through an NSL: name, address, length of service and toll billing records. The FBI cannot obtain other information from Verizon, such as content or location information, through an NSL,” the report says.
Much of the Internet was inaccessible to Chinese users for more than an hour yesterday after a domain name system error – believed by some to have been the result of a censorship error – led Web-surfers to a blank page hosted by an American technology company.
While users were able to access Web addresses hosted by China’s top-level .cn domain, the South China Morning Post reports that .com, .net, and .org domains would not resolve properly. Instead, users attempting to visit sites not hosted on China’s TLD were being redirected to a site owned and operated by Dynamic Internet Technology, a U.S. company that touts itself as a developer of censorship-defeating software. The company also reportedly helps host the Epoch Times and other sites banned by the Chinese government.
The South China Morning Post spoke with Dynamic Internet Technology CEO and founder, Bill Xia. He confirmed that the redirect website did indeed belong to his company but attributed the DNS issues to an error in China’s massive Web censorship system, often referred to as the Great Firewall of China.
“We noticed a sudden increase of traffic and suspected we were under attack,” Xia told the South China Morning Post. “Our security system has activated a protection mechanism so visitors to the address are not able to see anything.”
Xia went on to claim that the incident bore similarities to another more than ten years ago in which China’s DNS restrictions backfired and routed Internet users to the website of a spiritual group known as the Falun Gong, a group the Chinese government reportedly considers a cult. It should be noted that the Epoch Times, one of Dynamic Internet Technology’s clients, is often associated with the Falun Gong.
In contrast to Xia’s assertion, numerous reports indicate that Chinese officials and other hardliners are blaming the outage on a cyberattack.
There is a bug in the anti-cross-site scripting filter in Chrome and Safari that enables an attacker to bypass the filter in some cases and use an XSS flaw on a given site to compromise visitors’ machines. The vulnerability is fairly simple to exploit and a researcher has posted proof-of-concept code.
The vulnerability lies in the way that anti-XSS filters handle a specific attribute in IFRAME tags. These filters are designed to prevent attackers from being able to use XSS flaws on vulnerable Web sites in order to run malicious injected code in users’ browsers. Exploiting this flaw allows the attacker to bypass the filter and run his injected code.
Palop, a researcher at Eleven Paths, said he informed Google of the vulnerability in Chrome back in October, and the company developed a fix a couple of days later. The patch landed in the stable Chrome channel in the recent release of version 32. He said that the vulnerability still exists in Safari on Mac and iPhone, however. Eleven Paths contacted Apple about the flaw, but the company said it is still working on the issue.
“They confirmed our email, and told us they were working on it. And seems that they still are, since the program is still vulnerable. Everytime we have tried to contact back with them again, they reply back telling there is no news, but they are working on it,” the company blog post said.
Robert Hansen, a security researcher and director of product management at WhiteHat Security, said the attack could be a problem, although it’s not the most common XSS attack scenario.
“The attack does rely on being injected into an existing iframe tag. That does happen, but it is somewhat rare compared to the more common HTML or parameter injection variants and is often also coupled to a “content spoofing” exploit as well as defined by WASC. Generally speaking, people who use iframes should be wary of accepting user input to dictate the location of the frame, and sanitizing input is always a good idea,” Hansen said.
Spam emails promoting a non-existent PC version of the popular WhatsApp messaging service could be leading unsuspecting users to a malicious banking Trojan.
The emails, written in Portuguese, trick the recipient into thinking they already have 11 pending friend invitations, according to Kaspersky Lab’s Dmitry Bestuzhev, who wrote about the malware today on Securelist.com.
If users click on the “Baixar Agora” (Download Now) link in the email, they’re redirected – through a hacked Turkish server – to a Hightail.com URL to download the Trojan. Hightail, like Dropbox or YouSendIt, is a service that allows cloud file storage and downloads. The downloader then retrieves the banking Trojan from a server in Brazil. According to Bestuzhev, the file comes disguised as a relatively small 2.5 megabyte MP3 file, making it more likely users will open it.
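Because the banker arrives wearing an “.mp3” name, one defensive measure is to ignore the extension and inspect the file’s magic bytes: Windows executables begin with “MZ”, while genuine MP3 files begin with an “ID3” tag or an MPEG frame-sync pattern. A minimal sketch of that check:

```python
# Check file contents instead of trusting the extension. "MZ" is
# the DOS/PE executable header; "ID3" marks an ID3v2 tag; an MPEG
# audio frame starts with an 11-bit frame-sync pattern.

def looks_like_windows_executable(data: bytes) -> bool:
    return data[:2] == b"MZ"


def looks_like_mp3(data: bytes) -> bool:
    if data[:3] == b"ID3":
        return True
    # MPEG frame sync: 0xFF followed by the top three bits set
    return len(data) >= 2 and data[0] == 0xFF and (data[1] & 0xE0) == 0xE0


fake_mp3 = b"MZ\x90\x00" + b"\x00" * 60   # executable bytes, wrong name
real_mp3 = b"ID3\x04\x00" + b"\x00" * 60  # ID3v2 tag header

print(looks_like_windows_executable(fake_mp3))  # True
print(looks_like_mp3(fake_mp3))                 # False
```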
Once installed, the malware gets to work stealing data, packaging it up and shipping it off to the cybercriminals before downloading new malware files, up to 10 megabytes in size, to the system.
“The malware reports itself to the cybercriminals’ infections statistics console and when open, a local port 1157 sends stolen information in the Oracle DB format,” Bestuzhev wrote today.
It’s unclear whether the malware has made it to U.S. shores yet, but given the popularity of WhatsApp abroad – especially in Europe and Latin America – it appears to be contained to those regions, at least for now.
Bestuzhev even goes as far as to call it a “classic style of a Brazilian-created malware”: it appears to target users in Brazil, a country with an established WhatsApp user base, and the Trojan is downloaded from a Brazilian server.
The cross-platform messaging app has been massively popular as of late, boasting more than 430 million users, 30 million of them added in just the last month, and handling more than 50 billion messages a day. Rumors that Google was going to acquire the service last spring for roughly $1 billion bubbled up but quickly deflated.
The company’s CEO and co-founder Jan Koum has previously said the company makes a point to know as little as possible about its users and that it doesn’t collect people’s personal information, just users’ phone numbers and a list of users they want to communicate with.
While that may be true, it was reported in October that if someone wanted to eavesdrop on users’ WhatsApp conversations, it could be done, “given enough effort.”
Dutch researcher Thijs Alkemade disclosed a vulnerability in the app’s crypto implementation, specifically the fact that it uses the same key for incoming and outgoing messages, that could leave messages exposed. The company balked at Alkemade’s research, however, deeming the attack scenario “more theoretical in nature.”
This isn’t the first spam email campaign centered around the app. Spammers also leveraged the service in November to push malware via email by tricking users into thinking they had a new voicemail, even though WhatsApp is a text messaging service and does not provide a calling feature.
Two Chrome extensions went from legitimate browser add-ons to adware-spewing nuisances in the span of a single legitimate transaction.
Google recently took action against the Add to Feedly and Tweet this Page extensions, removing both from the Chrome Store after they were sold to adware brokers and found to be injecting ads into pages visited by users. Big picture, the risk has been mitigated, but the episode also exposed a weakness in Google’s auto-update mechanism, which automatically pushed out the changes configured by the new owners of the respective extensions without a heads-up to users.
Amit Agarwal, a popular blogger in India, sold the Add to Feedly extension after receiving a four-figure offer, he said. The deal was too good to resist, especially considering the extension took him an hour to develop. Agarwal admits he did not know the buyer, nor why they would pay good money for a Chrome extension that had been downloaded more than 30,000 times when it was sold.
Agarwal said that within a month, the new owner had built in advertising and users were seeing ads injected onto random websites they visited.
“These aren’t regular banner ads that you see on web pages, these are invisible ads that work the background and replace links on every website that you visit into affiliate links,” Agarwal wrote on his website labnol.org. “In simple English, if the extension is activated in Chrome, it will inject adware into all web pages.”
Google pulled the extensions from the Chrome store because they were in violation of the quality guidelines established by the company. Google’s policy states that extensions must have a single purpose and users should not be forced to agree to additional functionality, especially if it is unrelated to the extension.
“If two pieces of functionality are clearly separate, they should be put into two different extensions, and users should have the ability to install and uninstall them separately,” the policy states, adding that this goes for bundled toolbars as well; Google says those should be separate extensions.
The spammers’ actions are clever. Purchasing popular extensions such as Agarwal’s, which he said was developed in response to Google’s decision to shut down Google Reader, provides spammers and adware purveyors with an effective vehicle to peddle ads for profit. Couple that with the fact that they can piggyback on Google’s silent auto-update mechanism, and you have an inviting vector to push not only spam but even malware.
“The extension does offer an option to opt-out of advertising (you are opted-in by default) or you can disable them on your own by blocking the superfish.com and www.superfish.com domains in your hosts file,” Agarwal said of his old extension. “But quietly sneaking ads doesn’t sound like the most ethical way to monetize a product.”
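The hosts-file workaround Agarwal mentions maps both ad domains to the loopback address, so requests to them never leave the machine:

```
127.0.0.1    superfish.com
127.0.0.1    www.superfish.com
```

On Windows the file lives at C:\Windows\System32\drivers\etc\hosts; on OS X and Linux it is /etc/hosts.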
If you think you’re being clever by basing your password on the site you’re visiting or adding a zero to the end of 123456789, you’re not. A new list of the 25 worst passwords, culled from public dumps of passwords stolen in data breaches, shows that these are some of the least useful passwords you can come up with. The good news is that “password” is no longer the most popular bad password. The bad news is that the new loser is even worse.
The most often-used password found in public password dumps in 2013 was “123456”, about as far as you can get from a complex password. The list, compiled by SplashData, shows that “password”, which had been the most popular bad password for several years, fell to number two, while several variations of consecutive digits also landed in the top 10. The list reads like a primer on how to devise miserable passwords guaranteed to fall to a brute-force attack.
One of the major contributors to the database of publicly available user passwords was the Adobe data breach, which affected nearly three million users. A number of the passwords found in the top 25 list are clearly related to Adobe accounts, including “photoshop” and “adobe123”. The Adobe password list also contains a sad litany of lazy, simple passwords.
These passwords violate pretty much every generally accepted piece of advice experts give about constructing strong passwords. No capital letters, no special characters, consecutive digits, etc. In short, these are the passwords that attackers hope for when they are trying to compromise a user’s account. And, unfortunately, it’s often what they get.
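A checker for exactly those failings takes only a few lines; a minimal sketch, flagging the three properties named above:

```python
import string

def weakness_flags(password: str) -> list:
    """Flag the failings the worst-passwords list keeps exhibiting."""
    flags = []
    if not any(c.isupper() for c in password):
        flags.append("no capital letters")
    if not any(c in string.punctuation for c in password):
        flags.append("no special characters")
    # A run like 123456: all digits, each one greater than the last.
    if password.isdigit() and all(
            int(b) - int(a) == 1 for a, b in zip(password, password[1:])):
        flags.append("consecutive digits")
    return flags

print(weakness_flags("123456"))
# ['no capital letters', 'no special characters', 'consecutive digits']
```

Real password-strength meters go further (dictionary checks, length, entropy estimates), but even this trivial filter rejects every entry at the top of the 2013 list.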
A new spambot has been discovered that generates copious amounts of HTTP POST and GET requests in an attempt to disguise what it’s really up to and throw detection capabilities off the scent.
“In this case, it seems like it’s trying to hide impactful communication where there are actual payloads among innocuous requests don’t contain anything noteworthy,” said Ed Miles, a senior software engineer, malware research at Dell SonicWALL. “It’s hiding itself in its own traffic.”
The spambot, identified as Wigon.PH_44 by SonicWALL, is being served on compromised websites hosted on the WordPress platform. To date, there are up to 200 sites serving the malicious executable and Miles said that SonicWALL has recorded 15,000 hits in the wild on the malware signature, most of those in the United States.
The Trojan infects Windows machines, including Windows 8 64-bit systems, and not only sends spam, but researchers have also found a data-stealing component that searches victim computers for email and FTP applications such as CuteFTP, FTP Commander, FTP Navigator, FileZilla and more.
Miles and colleague Deepen Desai, a senior security researcher, also note that the malware has similarities to the Cutwail botnet, but aren’t ready to call it a variant yet.
“We were seeing the malware getting the [spam] email templates as part of the HTTP request, but they’re in an encrypted format; that is one of the things we have seen in the past with Cutwail,” Desai said. “I would say it’s too early to call it Cutwail, but based on the behavior we’ve documented, it seems similar.”
Cutwail is one of the most established and prolific spam botnets, at one point sending millions of spam messages daily. It was two million compromised machines strong and was used to distribute spam and financial malware targeting not only credit card data but also credentials. The Cutwail emails often included links that would lead victims to sites hosting the Blackhole Exploit Kit, which would then inject downloaders for other malware such as ZeroAccess or Zeus.
Victims in the campaign uncovered by Dell SonicWALL are infected via drive-by download attacks from the compromised WordPress sites. Miles and Desai said they had no information on how the WordPress sites were compromised or what the vulnerabilities may be. Once the malware has established a foothold, it connects to a command server and receives further instructions, including orders to spam out other malware families, the researchers wrote on the company blog.
Spambots and financial botnets have regressed a bit since the downfall of the Blackhole exploit kit, Cutwail included. When its alleged creator, a Russian named Paunch, was arrested in October, Blackhole and Cool, another alleged Paunch project, disappeared along with him. Cybercriminal gangs relied for years on Blackhole and its various webinject components to compromise websites and redirect victims to dangerous malware such as Zeus and ZeroAccess, both of which are prolific and handy at emptying bank accounts.
Now that Blackhole is gone, security researchers noticed that some gangs had upped their use of direct attachments in spam and phishing emails to spread malware such as Zeus—a much less efficient means of making a profit, experts said. Some gangs too, were not only pushing financial Trojans, but also ransomware such as CryptoLocker and PowerLocker in an effort to quickly regain revenue until a viable alternative to Blackhole emerged.
Cutwail was one such instance, researchers at Websense said, adding that some criminal outfits have tested the waters with a number of exploit kits, including Neutrino, Nuclear Pack and Magnitude. The Cutwail gang poked and prodded Magnitude, Websense said, before deciding instead to rely on emails containing malicious .zip files, the number of which shot up in the wild.
Starbucks has patched a vulnerability in its iOS app that was found last week spilling user data, including usernames and passwords, by adding what it calls an “additional safeguard measure” to protect its customers.
While it’s a relatively quick turnaround for the company – it only took about four days to push out a new version of the app – the security researcher who found the vulnerability is encouraging the company to give one remaining issue a fair shake. According to a post on Full Disclosure’s seclists.org Friday, security researcher Daniel Wood is hoping the coffee conglomerate takes a look at an outstanding geolocation issue still present in the application.
The issue isn’t a huge one – Wood says he doesn’t believe it’s even a security concern per se – but that it’s still worth fixing.
It involves a file stored on iOS devices at /Starbucks/Library/Preferences/com.starbucks.mystarbucks.plist that contains a user’s last logged geolocation. According to Wood, the difference between this file and the old one, session.clslog, is that it stores only the last location at which a customer used the device, not a running log of where customers have been.
“I do recommend that the above issue [with mystarbucks.plist] be remediated within the next release cycle of the mobile application to prevent a customers’ last logged geolocation data from being stored,” Wood said in his write-up.
While the geolocation information is overwritten each time and can’t be used to track user movement patterns over time, there’s a chance it could still be used in coordinating an attack, perhaps in a social engineering capacity.
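For a sense of how little effort reading such a file takes, here is a sketch using Python’s standard-library plistlib; the key names are hypothetical, not the app’s actual schema:

```python
import plistlib

# Hypothetical keys; the real schema inside
# com.starbucks.mystarbucks.plist is not published here.
sample = {"lastLatitude": 47.6097, "lastLongitude": -122.3331}
blob = plistlib.dumps(sample)

# Anyone with access to the device's file system or an unencrypted
# backup can recover the values with the standard library alone.
recovered = plistlib.loads(blob)
print(recovered["lastLatitude"], recovered["lastLongitude"])
```

Because .plist files are a documented serialization format rather than anything encrypted, whatever an app writes into one is readable to anyone who can reach the file.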
Last week it was discovered that a file (session.clslog) on version 2.6.1 of the app stored users’ personal information – their username, email address, address, geolocation data and password – in clear text. Starbucks initially dismissed Wood’s report, calling the vulnerabilities “theoretical” and asserting there was “no known impact” to their customers at the time.
The vulnerability was only locally exploitable; Starbucks’ servers were never hacked, and there was never a chance that users’ credit card info could have been in danger.
Late last week, however, the company’s Chief Information Officer Curt Garner released a letter to users assuring them that “out of an abundance of caution” Starbucks was working hard to “accelerate the deployment of an update for the app.”
The company did just that on Friday when it released version 2.6.2 of the app. Now when users open the updated version “it clears session.clslog out, effectively wiping this data off your device,” according to Wood.
“This behavior makes sense as the application is required to run in order to execute the programmatic functions that address the issue of a static file that was being spooled to,” Wood rationalized.
With the updated app, since data elements are no longer being written to the session.clslog file in clear text, users should expect their information will be safe going forward.
Starbucks’ app is one of the most popular apps available for iOS and routinely appears in Apple’s “Top 100 Free Apps” section. The app lets users connect their Starbucks card to their smartphone, reload funds via credit card and treat the phone like cash in stores worldwide.
For the people expecting President Barack Obama to announce sweeping changes to the NSA’s surveillance programs, his speech on Friday likely was a major disappointment. Obama laid out some new controls and limits for some of the more controversial programs, specifically the phone metadata collection system, but much of the speech focused on why the NSA’s programs work and why the existing oversight keeps it in check. Many privacy advocates and former intelligence officials decried the changes as window dressing, but in the wake of his speech, it’s become clear that some key government officials support Obama’s position and see little need for reform.
The metadata program has become the poster child for the NSA’s alleged abuses, overreaching and invasions of privacy. The program enables the agency to collect hundreds of millions of phone records from mobile providers every month and store them, under the theory that the agency might at some point need to query that database and see whether there are any calls that could pertain to a terror investigation. Obama announced some new limits on the ways those queries work and also said the government should no longer store that data, but it should instead rest with some third party. The message from Obama was clear: this program is not going away.
“I believe it is important that the capability that this program is designed to meet is preserved,” Obama said.
Two days later, Rep. Mike Rogers, the Republican chairman of the House Intelligence Committee, and Sen. Dianne Feinstein, the Democratic chairwoman of the Senate Intelligence Committee, appeared together on TV to discuss the president’s proposed reforms, and an interesting thing happened: they agreed. Asked whether the NSA metadata program would be killed, Feinstein said she couldn’t see it happening and believed that the program was necessary and appropriate.
“I don’t believe so. The president has very clearly said he wants to keep the capability,” Feinstein said. “We would agree with him. The NSA are professionals. They are vetted and carefully supervised.”
“The most important victory was the president standing up and saying the program didn’t have abuses,” Rogers said in the interview on Meet the Press.
As the chairs of the powerful intelligence committees, both Rogers and Feinstein have had classified knowledge of the NSA’s programs for years now, so the revelations of the last few months would have come as little surprise to them. They’ve had months and years to construct their positions on this issue, and what they came up with was an echo of a talking point. Metadata collection is important because it could prevent a terror attack. It never has, mind you, but it could.
Feinstein, who in Senate hearings has consistently defended the NSA and its programs, has used a variety of different arguments to support her position, including the debunked story line that metadata collection could have prevented 9/11. But she broke out a new one on Sunday, saying that the government is actually less of a threat to users’ privacy than corporate America is.
“When you look at what companies collect, the government doesn’t seem to be a major offender at all,” she said.
This is an argument that will be familiar to every parent on earth. It’s the equivalent of a child caught with his hand in the cookie jar saying, “But I only took one! Johnny took five!” While Feinstein’s statements may have some kernel of truth to them, her argument doesn’t hold up. Users typically have some level of awareness that they’re giving up data to ad companies, mobile phone carriers, Google, Apple and other companies. It’s the foundation of the Internet economy. We trade our personal information for convenience, discounts and access. But in the case of the NSA’s collection methods, only a tiny fraction of the population had any idea before these leaks started that the agency was amassing astounding quantities of data on Americans’ online activities, and those who did were in no position to discuss it.
But now that these programs are common knowledge and we’ve seen their scope and reach, using data collection by private companies as a distraction from the NSA’s activities is disingenuous. Certainly private companies collect massive amounts of data on their customers, and that’s a serious problem in its own right. But the government is not a for-profit organization and it’s meant to protect its citizens, not to treat them as suspects in pre-crime scenarios. Change, not misdirection, is what’s needed.