REDMOND, Wash.–The Microsoft Digital Crimes Unit has been spearheading botnet takedowns and other anti-cybercrime operations for many years, and it has had remarkable success. But the cybercrime problem isn’t going away anytime soon, so the DCU is in the process of building a new cybercrime center here, and soon will roll out a new threat intelligence service to help ISPs and CERT teams get better data about ongoing attacks. Dennis Fisher sat down with T.J. Campana, director of security at the DCU, to discuss the unit’s work and what threats could be next on the target list.
Threatpost: When you first started going out and doing the botnet takedowns, how much resistance did you see from people wondering why Microsoft was getting involved in this kind of thing?
Campana: Not much resistance at all, really. But we’re very careful about how we do this. We’re not just going out there shooting stuff. We walk in with a pile of legal documents. We’re asking for a judge to agree with what we found. We’ve tried really hard to be transparent with what we do. There are other groups out there that don’t have that same transparency. We’re an open book when it comes to the things we’re doing.
Threatpost: And this isn’t something that Microsoft does on its own. You’ve worked with other vendors on some of these actions. How important is that collaboration aspect of it?
Campana: Very important. We have a huge partnership program through our MAPP (Microsoft Active Protection Program) partners and that’s great. It’s bringing together people of a like mind. It’s been great to see that. I look forward to other companies doing this at some point.
Threatpost: Do you think that’s coming?
Campana: At the geek level, most of my counterparts in other companies want that to happen. We’re very lucky that we have tremendous support from the very top of the company on down for what we do. Without that top-down support, we wouldn’t be where we are. Folks at other organizations are working to get that. It’s necessary for this kind of work.
Threatpost: In the last few years the DCU has focused mainly on the botnet problem. Are there any other large threats looming out there that you’re looking at?
Campana: We’ve been working some on the problem of those phone scams where people call you up and tell you your PC is infected. That’s a huge problem. And we’ve done some work on scareware as well. But botnets are going to be the major issue for us to deal with, I think. One thing that could become a bigger issue is mobile. It changes the way people are connected to the Internet. You’re connected to the Internet in a more permanent way. That’s the way computing is going, so cybercrime would almost have to go that way, too. We’re also looking at some of the targeted attacks that are going after ad platforms. The problem of click fraud is a big one.
Threatpost: Once you do the takedown of a botnet and get through all of that, how much more is the DCU involved with what happens afterward?
Campana: It depends, but the idea is that we are working very hard behind the scenes before we go to the judge. We’re trying very hard to find the person who owns the servers we want to seize. When we go into a data center, that person isn’t there to defend himself, so we are working very hard to notify them that we took the servers. We want to find the person. We have to satisfy the judge that we did everything we could. We see a huge advantage in handing off a very nice package to law enforcement.
Threatpost: How is the Cyber Threat Intelligence Program you’re building going to work?
Campana: We’ve been testing it for about a year now. We’ve been sending emails once a week to the ISPs and CERTs we work with, and we looked at it and said, we’re a software company and a cloud provider, how can we marry those two to make this better? One of the huge assets for us is our scale. So we wanted to build something that scales. We’re signing up CERTs now for the new service. Right now the input for the service is only our MARS (Microsoft Active Response for Security) data. The second piece would be attack data from across the company. I want as much data as we can get.
Threatpost: How close is it to being ready?
Campana: It works in the lab. But there’s a big difference between the lab and Internet scale. When you bring it into the real world, politics and other things get in the way.
Threatpost: One of the solutions to the botnet problem that people have talked about for years is having ISPs or security companies actively remove the malware from users’ machines. Is that a necessary step?
Campana: I want user consent. The user needs to take ownership of his own device. We have to balance what we could do and what we should do.
For every punch a hacker throws, there is a counter from a security company, and then, inevitably, the hacker adjusts again.
That’s what’s happening right now with the PushDo malware.
This week, Dell SecureWorks, Damballa and Georgia Tech collaborated on a research report revealing that PushDo, a Trojan dropper largely responsible for Cutwail, one of the largest spam-producing botnets on record, was back. PushDo had returned in force with a domain generation algorithm capable of spinning up 1,380 .com domains every day in the event its two built-in command and control servers are offline.
The publication of the report clearly put the hacker group to work. Researchers at Seculert of Israel reported last night that a DGA found in two new variants of the malware generates .kz domains instead of .com, making the malware again difficult to detect and resilient against antimalware signatures.
“[DGA] is very effective against traditional and on-premises security solutions which are signature based,” Seculert CTO Aviv Raff told Threatpost. “There are already several malware families which have implemented this feature, and I expect to see more in the future.”
Raff said Seculert found the .kz domains on a number of hijacked websites serving the malware. The researchers took advantage of a misconfiguration on the attackers’ part to see a list of files in the folder hosting the PushDo variants. Two new executables, the new variants, were uploaded in the early afternoon on Wednesday to a server in Europe.
Dell SecureWorks and Damballa experts confirmed on Wednesday that the attackers were likely from Eastern Europe. While the new DGA domains are from Kazakhstan, that doesn’t necessarily mean the attacks originate from the former Soviet republic.
“Anyone can buy a .kz domain,” Raff said. “The interesting part though, is buying a .kz domain requires for the DNS server and the hosting to be at Kazakhstan.”
PushDo and Cutwail have been taken down numerous times by authorities, yet each time the malware has returned with new features making it more durable. The latest version, which researchers found in March, has infected anywhere between 175,000 and 500,000 machines, experts at Damballa and SecureWorks said. The malware is capable of detecting what security software is running on a compromised machine and is able to query legitimate websites in addition to its C&C servers in order to blend in with regular Web traffic.
Researchers were able to sinkhole some of the command and control .com domains generated by the DGA and recorded more than 1.1 million unique IP addresses trying to connect to the sinkhole–an average of 35,000 to 45,000 daily requests were made.
A DGA periodically generates and then tests new domain names, checking whether a C&C server responds. This technique hinders static reputation services that maintain lists of C&C domains and enables hackers to bypass signature-based and sandbox protections. It also cuts down the need for a large command and control infrastructure, lessening the chances it is exposed to researchers and the authorities. This version of PushDo was generating nine- to 12-character dot-com domains.
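The scheme is easier to see in code. The sketch below is not PushDo’s actual algorithm, which researchers have not published here; it simply shows how a date-seeded generator lets the bot and its operator independently derive the same list of candidate domains, in the nine-to-12-character range the researchers observed. The operator registers a few names from the day’s list in advance; defenders must predict or sinkhole them.

```python
import hashlib
from datetime import date

def generate_domains(seed_date: date, count: int = 5) -> list[str]:
    """Derive deterministic pseudo-random .com domains from a date.

    Because the output depends only on the date and a counter, bot and
    operator compute identical lists without ever communicating.
    """
    domains = []
    for i in range(count):
        # Hash the date plus a counter to get deterministic "random" bytes.
        digest = hashlib.sha256(f"{seed_date.isoformat()}:{i}".encode()).hexdigest()
        length = 9 + int(digest[0], 16) % 4  # 9 to 12 characters
        # Map hex nibbles onto lowercase letters a-z.
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:length])
        domains.append(name + ".com")
    return domains

# Both sides compute today's candidate list; the bot then tries each
# name in turn until a C&C server answers.
print(generate_domains(date(2013, 5, 17)))
```

Blacklisting is weak against this because most generated names live for only a day, which is why the researchers resorted to sinkholing the domains instead.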
In an Oslo Freedom Forum workshop offering advice to free speech advocates on how to better secure their devices against government surveillance, security researcher Jacob Appelbaum uncovered a new strain of malware with backdoor capabilities on the Mac machine of an Angolan activist attending the event.
Appelbaum is probably best known for his work with the online anonymity project Tor and for his affiliation with, and various legal battles stemming from, the 2010 and 2011 publications of U.S. State Department cables by the whistleblower site WikiLeaks. Appelbaum was also the first researcher to publicly detail the attack on the certificate authority Comodo.
F-Secure’s Mac analyst, known simply as “Brod,” is still in the process of investigating the malware, but his fellow F-Secure researcher Sean Sullivan notes that the sample is signed with a legitimate Apple Developer ID. It launches from the users and groups folder and dumps screenshots into another folder called “MacApp.”
The Trojan appears capable of a number of fairly simple spying functions, such as taking screenshots and uploading .zip files. It also connects to two command and control servers, one in the Netherlands and one in France. At the time of his post yesterday morning, Sullivan wrote that the French C&C server would not resolve and the Dutch one was informing him that he was forbidden from accessing it.
On Twitter, Sullivan and Appelbaum discussed that the Trojan appeared to be related to an older piece of Mac malware called HackBack.
Appelbaum claims that the Angolan activist’s Mac was compromised in a spear-phishing attack.
Apple has since revoked the Developer ID with which the malware is signed, according to a tweet sent by Appelbaum.
According to VirusTotal, only one of 46 antivirus vendors is detecting the threat: F-Secure, which identifies it as Backdoor:OSX/KitM.A (SHA1: 4395a2da164e09721700815ea3f816cddb9d676e).
Mozilla has tapped the brakes on its plans to block third-party cookies by default in the Firefox browser.
Test versions of Firefox 22, scheduled for a June release, were supposed to include a patch that blocked third-party cookie drops by default. However, Mozilla CTO Brendan Eich said yesterday those plans have been temporarily put on hold for more testing.
Mozilla has been promoting this privacy-conscious decision for months, most publicly at the RSA Conference in February. Chief privacy officer Alex Fowler commented during a panel discussion about the practices of advertisers, data brokers and others who monitor and profit from users’ online behaviors. In particular, Fowler concentrated on the practice of third parties dropping cookies on users’ machines without the user’s consent and from sites the user has not visited. The policy, Fowler said, would state that in order for cookies to be placed on a user’s computer, the user must interact with the site, not third-party content on another site. Apple’s Safari browser blocks third-party cookies by default, and this is the model Mozilla is following.
This week’s announcement by Eich backpedals a little on Mozilla’s stance.
“The idea is that if you have not visited a site (including the one to which you are navigating currently) and it wants to put a cookie on your computer, the site is likely not one you have heard of or have any relationship with,” Eich wrote on his blog. “But this is only likely, not always true.”
Eich said Mozilla will refine its patch to address false positives and negatives. Eich offered an example where a user could visit a site that would embed a cookie from another site it owns as a false positive. As for false negatives, he said just because a user visits a site once should not be consent for that site to drop a cookie and track the user’s activities.
“Our challenge is to find a way to address these sorts of cases,” Eich said. “We are looking for more granularity than deciding automatically and exclusively based upon whether you visit a site or not, although that is often a good place to start the decision process.”
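Mechanically, the visited-site heuristic Eich describes reduces to a set-membership check, which also makes its failure modes easy to see. A toy sketch (the domain names are invented; real browsers compare registrable domains, not exact hostnames as done here):

```python
def should_accept_cookie(cookie_domain: str, visited_domains: set[str]) -> bool:
    """Accept a cookie only if the user has visited the setting domain
    as a first party. This is the baseline heuristic, before any of the
    refinements Mozilla says it is still working on."""
    return cookie_domain in visited_domains

visited = {"news.example.com"}
print(should_accept_cookie("news.example.com", visited))  # True: first party
print(should_accept_cookie("ads.tracker.net", visited))   # False: never visited
```

The false positive Eich mentions falls out directly: a cookie from a sibling domain the first-party site legitimately owns is blocked because that sibling was never visited. The false negative does too: one visit to a site puts it in the set forever, which the heuristic treats as consent to track.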
Eich said Mozilla will ship a refined version of the patch with blocking on by default.
“Our next engineering task is to add privacy-preserving code to measure how the patch affects real websites,” he said. “We will also ask some of our Aurora and Beta users to opt-in to a study with deeper data collection.”
The patch, Eich said, moved this week to the Firefox 22 beta release, but it is not on by default there; users would have to opt in. The patch remains on by default in the Aurora release. Eich said false positives can hamper the user experience on sites users visit, while false negatives enable tracking where it’s not wanted.
“We have heard important feedback from concerned site owners. We are always committed to user privacy, and remain committed to shipping a version of the patch that is ‘on’ by default,” Eich said. “We are mindful that this is an important change; we always knew it would take a little longer than most patches as we put it through its paces.”
Privacy advocates such as the Electronic Frontier Foundation have praised Mozilla’s intention to follow Apple’s lead here, yet recognized that making a change such as this could affect the bottom line of many advertisers.
Other privacy-related tracking measures such as Do Not Track are also political hot potatoes between privacy advocates and advertisers. Microsoft, for example, ships Internet Explorer 10 with DNT turned on by default, a signal to sites that the user does not want to be tracked. Some sites, however, will ignore the signal, and groups such as the Apache HTTP Server Project argue that Microsoft’s decision does not indicate the user’s wishes. Mozilla’s Fowler, meanwhile, said fewer than 15 percent of Firefox users send the DNT header.
“People are asking for a different level of privacy on your service, and you have to listen to that. It’s critical to the business and web ecosystem,” Fowler said at RSA. “At Mozilla, we also do online advertising campaigns and email outreach. We try to think about the tracking we impose on users, so we are making an effort to work with vendors who are willing to respect the DNT header. It’s not a condition, but we think it’s important for organizations advocating for this that we spur service providers to understand and respect it.”
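Under the hood, DNT is nothing more than an HTTP request header; honoring it, as Fowler notes, is a choice each site makes when it reads that header. A minimal illustration (the plain dict stands in for a real web framework’s request headers):

```python
def wants_no_tracking(headers: dict) -> bool:
    """Return True if the request carries the Do Not Track preference.

    Per the DNT convention, the header value "1" means the user has
    asked not to be tracked. Sites that ignore the signal simply never
    perform a check like this one.
    """
    return headers.get("DNT") == "1"

print(wants_no_tracking({"DNT": "1"}))  # True: browser sent the signal
print(wants_no_tracking({}))            # False: no preference expressed
```

This is also why the Apache project’s objection is purely a policy one: a default-on browser sends exactly the same `DNT: 1` bytes as a user who chose the setting, and the server cannot tell the two apart.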
REDMOND, Wash.–Cybercrime has developed in the last few years into a major concern, not just for the consumers and businesses that are victims, but also for governments around the world. Obama administration officials have called it one of the larger threats to the United States economy. While law enforcement agencies handle the investigative and prosecutorial piece of things, they are increasingly being aided by experts at companies such as Microsoft, Google and others that have unique insights into attackers’ activities and the capability to make life more difficult for them.
Microsoft, for one, has taken a very aggressive stance on cybercrime in recent years. By virtue of its massive user base, the company has a lot of visibility into the ways that attackers are exploiting not just Microsoft products, but also other applications installed on target machines. As one would imagine, Microsoft officials take a dim view of attackers going after their customers, and the company has been using a variety of methods for preventing cybercrime and punishing those involved in it. The most visible piece of this arsenal is the Microsoft Digital Crimes Unit, a small group of engineers, security experts and lawyers here who spend their days tracking botnet operators and malware writers and helping law enforcement agencies around the world identify and find them.
It may seem odd that a software vendor, even one as large and influential as Microsoft, would fund such a team, but company officials say it’s an important part of keeping customers safe. If cybercrime can’t be prevented, the DCU members want to be sure that it is less attractive and less profitable for the attackers who choose to get involved.
“The bad guys are getting better at what they do, and we want to be a force-multiplier for good. Our job is not law enforcement. Our goal is to transform this fight to really disrupt and destroy the way cybercriminals operate,” said T.J. Campana, director of security at the DCU.
The biggest target thus far for the DCU team has been the botnet problem. Botnets are used for a variety of nasty purposes, especially spam, DDoS attacks and data theft. Microsoft and other vendors have been tackling the problem from various angles for years now, but the tool that they’ve found to be the most effective involves a combination of legal and technical means of crippling a botnet. The company, along with law enforcement agencies and other vendors, has succeeded in taking down several botnets in the last few years, including Kelihos, Zeus, Waledac and Rustock. In many of these cases, along with sinkholing the target botnet’s command-and-control servers, the Microsoft DCU team has used court orders to physically seize servers. This tactic has been somewhat controversial, but Campana said the nature of the threats has made it necessary.
“Botnets are the backbone of the modern cybercriminal,” he said. “We’re severing the connection between the harmed customer and the bad guys. We’ve had court orders to go in and rip the servers out of the data centers. It doesn’t get any cooler than that. But it’s an extraordinarily high burden of proof to be able to do that.”
Disrupting botnets can be a frustrating business, as Microsoft has found: attackers often react to a takedown by simply moving to new infrastructure, finding pliable hosting providers and getting back to business. That’s always going to be a possibility, especially when attackers can buy bot toolkits cheaply and quickly build up a new network of compromised machines.
“The cost of entry into cybercrime is very low and the profits are high. We want to increase the cost of the bad guys getting into the cybercrime business,” he said. “And if they do get in, we want to decrease their ability to make money. We want to demotivate this kind of activity.”
But Campana said the takedowns are only one piece of the larger picture. The company is now building a new cybercrime center at its headquarters here, and DCU officials hope to make it a nerve center for anti-cybercrime operations across the industry. To help speed its response to emerging attacks on its customers, the Microsoft DCU also is working on a new Cyber Threat Intelligence service, which Campana said could serve as a two-way communication channel to get information and remediation tools out to cybercrime victims much more quickly.
“I want to get to a place where the bad guy launches a new attack, and within a couple of minutes we can respond and get a message to victims,” he said. “I want that identification, notification and remediation happening as quickly as possible.”
Campana’s team also is working with a number of outside companies and groups to help make it more difficult for attackers to get access to the tools they need for their operations. One way they’re doing this is by working with hosting providers, which are key cogs in many cybercrime machines, especially botnets. Attackers often use so-called bulletproof hosting providers to house their C2 servers for botnets, malware distribution and phishing campaigns. But they also will take advantage of legitimate hosting providers who aren’t aware of what’s going on. Campana said his team is working with many hosting companies to fix this. They’re also talking with domain registrars to prevent attackers from being able to register dozens or hundreds of domains quickly for use in fast-flux botnets.
“A lot of the domains they register are just randomly generated numbers and letters. We’re talking with the hosting providers and registrars to say, let’s just not let these kind of domains be registered ever,” he said.
While the DCU has seen plenty of success so far, Campana said there’s no shortage of challenges looming for his team and others interested in disrupting cybercrime.
“The bad guys are moving at such a fast pace and they’re changing their tactics on the fly,” he said. “They don’t play by the rules. They don’t have any rules, and we have to find a way to make it harder for them.”
A new malware campaign has been hitting Pakistan hard over the last few months, and after a little e-sleuthing, it appears the not-so-stealthy attacks have been originating from nearby India and abusing a code-signing certificate to run their binaries.
Security firm ESET has a full rundown of the campaign today on its WeLiveSecurity.com blog by malware researcher Jean-Ian Boutin, including an array of details about how the attack has been executed and the types of payloads being deployed on unsuspecting Pakistanis’ computers.
This campaign relies on a bogus digitally signed certificate from the Indian company Technical and Commercial Consulting Pvt. Ltd. The certificate was initially issued in 2011 and revoked for files signed after March 2012; still, it was used to sign more than 70 different malicious binaries on and off from that March until September of that year.
The malware uses two vectors. The first is a well-known Word document vulnerability, CVE-2012-0158, which has been used in everything from the Red October campaign to a bevy of recent attacks against Tibetan and Uyghur users. The other vector spreads files posing as Word and PDF documents; each, once opened, “downloads and executes additional malicious binaries.” Some of those files are disguised as “pakistandefencetoindiantopmiltrysecreat.exe” and “pakterrisiomforindian.exe,” according to the blog post.
Payloads are set up to glean data – screenshots, keystrokes, documents in the computer’s trash – from users’ computers and in turn send them to the attackers’ servers. Interestingly enough, as Boutin notes, the information is being uploaded to the attacker’s computer unencrypted, so it’s easy to see what exactly is being transferred.
The blog also notes a number of Indian connections, including the mysterious Indian code-signing certificate, references to Indian culture in the binaries, and signing timestamps between 5:06 and 13:45, consistent with eight-hour shifts worked in India.
An accompanying graph in the blog entry suggests that while other nations are being hit by the campaign, it is largely affecting Pakistan, with 79 percent of detections coming from that South Asian country.
A similar type of malware, Redpill, was found hijacking users in India last month. That campaign also stole screenshots, in addition to bank account credentials and email information and was the second coming of a malware strain that made its first appearance in 2008.
Boutin’s full research on the malware targeting Pakistan is being presented tomorrow at the CARO Workshop, a security conference in Bratislava, Slovakia. For more on his research, head to ESET’s blog.
Many popular online services have started to deploy password strength meters, visual gauges that are often color-coded and indicate whether the password you’ve chosen is weak or strong based on the website’s policy. The effectiveness of these meters in influencing users to choose stronger passwords had not been measured until recently.
A paper released this week by researchers at the University of California, Berkeley, the University of British Columbia and Microsoft provides details on the results of a couple of experiments examining how these meters influence computer users when they’re creating passwords for sensitive accounts and for unimportant accounts.
The long and the short of it: It depends.
Users, despite a barrage of news about stolen credentials, identity theft and data breaches, will re-use passwords over and over, especially at account creation, regardless of the presence of a meter. If the context changes, however, and users are asked to change existing passwords on sensitive accounts, the presence of a meter does make some difference.
“I didn’t expect them to have any effect,” said Serge Egelman, a UC Berkeley researcher, in an interview with Threatpost. Egelman, along with University of British Columbia colleagues Andreas Sotirakopoulos, Ildar Muslukhov, and Konstantin Beznosov, and Cormac Herley of Microsoft, began their experiment as a means of testing a new type of meter they developed that measures password strength relative to other users. What they learned instead is that peer pressure isn’t as effective as the context in which the meter is shown.
The experiment was two-fold, first in a lab and then in the field. In both instances, none of the participants knew they were taking part in a password study. There was also a control condition for both studies where a meter was not presented. For sensitive accounts where users see a meter, Egelman said, the users deployed strong passwords. In the field experiment conducted against “unimportant accounts,” the meter made no difference and most of the time users re-used old passwords.
“We conclude that meters result in stronger passwords when users are forced to change existing passwords on important accounts and that individual meter design decisions likely have a marginal impact,” the team wrote.
Password re-use has some obvious risks, the worst being that if a hacker compromises one password on an unimportant account, for example, they could use that password on more sensitive accounts protected by the same secret code.
“We don’t have anything better [than passwords],” Egelman said. “That’s what it comes down to. All of the problems we generally see with passwords are as a result of poor policies and stems from the frequencies we see of databases getting disclosed. If more work was done to secure stored encrypted passwords, less effort would need to be done on the users’ end.”
With 75 percent of the Alexa top 20 websites using some sort of meter, Egelman said, there is an expectation that users will choose stronger passwords if a meter is present. The team’s experiments demonstrated noticeable changes in password strength with the presence of a meter if the user was prompted to change their password, for example because of a policy mandate that passwords be changed periodically. The test results show that the presence of either a weak-to-strong meter or a meter comparing passwords against those of other users did nudge users toward stronger passwords, while those without a meter continued to re-use old or weak passwords. Users also chose longer passwords and used more symbols and lower-case letters.
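For illustration, a weak-to-strong meter of the kind the study examined can be as simple as scoring password length plus character-class variety. The thresholds below are invented for this sketch, not taken from the paper or from any real site:

```python
import string

def strength(password: str) -> str:
    """Toy weak-to-strong password meter.

    Score = length + 2 points per character class used (lowercase,
    uppercase, digits, punctuation), then bucket the score into the
    color-coded labels a site would display.
    """
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    variety = sum(any(c in cls for c in password) for cls in classes)
    score = len(password) + 2 * variety
    if score < 12:
        return "weak"
    if score < 18:
        return "medium"
    return "strong"

print(strength("password"))     # weak: 8 chars, one class
print(strength("Tr0ub4dor&3"))  # strong: 11 chars, all four classes
```

Real meters vary widely in how they score, which is consistent with the team’s conclusion that individual meter design decisions likely have only a marginal impact compared with the context in which the meter appears.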
The 47 participants were users affiliated with the University of British Columbia who used the school’s single sign-on system for access to student accounts and a campus portal. They were not informed they were taking part in a password study; instead, they were told they were testing the usability of the portal. Once they logged in, a notice popped up that their passwords had expired per policy and they were required to change them.
The field experiment, meanwhile, was conducted against less important accounts for 541 participants, many of whom re-used weak, existing passwords. In an exit survey, only 13 percent remembered seeing strength meters and others said the meters would have labeled their passwords as weak.
“We found that reused passwords were not observably weaker than the passwords of those who claimed not to have reused passwords. Thus, the extent to which password reuse impacts strength remains unclear,” the team wrote in its paper. “We believe that effects stemming from participants’ perceptions about the unimportance of the website outweighed any effects relating to the meters or their choice to reuse existing passwords; when passwords were reused, weaker existing passwords were employed.”
The team concluded that the presence of meters at site registration, for example, is not as effective as at a forced password change, and that participants are likely to choose weak, easy-to-remember passwords they’ve used before if not prompted to check their strength.
“We’re not going away from passwords any time soon. I would like to see more focus on acceptable password policies in terms of balancing the burdens on users with site security requirements,” Egelman said. “A lot of the burden is placed on users, and that results in forgetting passwords and those add up as costs for organizations in terms of resets and support calls. If sites did things differently in terms of how passwords were protected on the backend, a lot of password requirements could be loosened.”
Four times since 2008, authorities and technology companies have taken the prolific PushDo malware and Cutwail spam botnet offline. Yet much like the Energizer Bunny, it keeps coming back for more.
In early March, researchers at Damballa discovered a new version of the malware that had adopted a domain generation algorithm (DGA) in order to not only help it avoid detection by security researchers, but to add resiliency.
Cutwail has historically been one of the largest spam botnets, hoarding millions of compromised computers that have sent billions of spam messages through the years. The malware is installed on compromised machines by the PushDo dropper Trojan.
This version of PushDo has infected anywhere from 175,000 to 500,000 bots, researchers said. Past versions have been able to collect system data in order to determine which antivirus software and firewall processes were running on a compromised machine. The latest iteration, in addition to its DGA capabilities, can also query legitimate websites such as universities and ISPs in order to blend in with regular web traffic and trick sandbox-type analyses.
The added domain generation algorithm capabilities enable PushDo, which can also be used to drop any other malware, to further conceal itself. The malware has two hard-coded command and control domains, but if it cannot connect to either of them, it will rely on the DGA instead. This capability was only recently discovered.
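That fallback order can be sketched in a few lines; the domain names and the `is_reachable` probe below are hypothetical stand-ins, not PushDo’s actual code:

```python
def locate_cnc(hardcoded, dga_candidates, is_reachable):
    """Try the hard-coded C&C domains first; only when none of them
    responds, walk the day's DGA-generated candidate list.

    `is_reachable` stands in for a real network probe (DNS lookup plus
    a protocol handshake).
    """
    for domain in hardcoded:
        if is_reachable(domain):
            return domain
    for domain in dga_candidates:
        if is_reachable(domain):
            return domain
    return None  # no live C&C found today; retry with tomorrow's list

# Example: both hard-coded servers are down (say, seized), so the bot
# falls through to a generated name the operator quietly registered.
alive = {"qkzjvapmw.com"}
print(locate_cnc(["cnc1.example.net", "cnc2.example.net"],
                 ["abcdefghi.com", "qkzjvapmw.com"],
                 lambda d: d in alive))
```

The design means a takedown of the two primary servers does not behead the botnet; it merely shifts the race to whoever registers the next day’s generated domains first.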
“On the technical side of writing (DGA) code, there are enough examples out there that the average hacker could do that part,” said Brett Stone-Gross, a senior security researcher with Dell SecureWorks’ Counter Threat Unit. “The more difficult thing is having the infrastructure set up and the organization to know you need new domains set up and registered. This takes more organization than hackers in the past have demonstrated and shows how sophisticated some botnet operators are getting with business plans and having the commitment to follow a plan.”
Researchers at Dell SecureWorks, Georgia Tech and Damballa were able to sinkhole some of the command and control domains generated by the DGA and recorded more than 1.1 million unique IP addresses trying to connect to the sinkhole–an average of 35,000 to 45,000 daily requests were made.
Most traditional malware carries built-in C&C domain names, but this tactic becomes moot once researchers get their hands on the binary and block or sinkhole those domains. As a counter-tactic, malware writers began dynamically sending regularly updated configuration lists with new C&C server information, yet these were vulnerable to interception as well.
DGA is the latest countermeasure. These algorithms will periodically generate and then test new domain names and determine whether a C&C responds. This technique hinders static reputation servers that maintain lists of C&C domains and enables hackers to bypass signature-based and sandbox protections. It also cuts down the need for a large command and control infrastructure, lessening the chances it is exposed to researchers and the authorities. This version of PushDo generates between nine- and 12-character dot-com domains.
PushDo joins Zeus and the TDL/TDSS malware families in using DGA. Damballa learned from passive DNS analysis it conducted that PushDo was generating more than 1,300 unique domain names every day, most of these lasting just a day, cutting into the effectiveness of blacklisting operations.
“This one is very similar to Zeus as far as effectiveness,” said Jeremy Demar, a senior threat analyst at Damballa. “Zeus’ primary communications method was peer-to-peer. If it’s in a corporate environment that blocks peer-to-peer, it falls back to DGA. This is very similar in capabilities and effectiveness.”
Among the 1.1 million IPs connecting to the PushDo DGA domains were a number of government organizations, government contractors and military networks.
“It’s a relatively small population on the interesting list as far as numbers go, but because of the level of sensitivity of those organizations, we made sure to let everyone know,” Stone-Gross said, adding that a takedown similar to some of the previous efforts requires a lot of legal and technical cooperation. Both companies hope that awareness of this issue will lead to updates of endpoint protection technologies.
For the second time this month, the civil war-torn nation of Syria lost its connection to the Internet this morning, emerging from the blackout several hours later, according to information provided by Arbor Networks.
Google’s Transparency Report webpage revealed that at around 2:56 AM GMT, Syria’s connection to the outside Internet disappeared. The Internet then appeared to return at around 11:12 AM GMT. Google warns, however, that these data points are still being finalized and should be interpreted with caution. Arbor’s Threat Level Analysis System expressed no such uncertainty.
If Syria did fall offline again, it would be the third time in seven months. Thus far, there have been no clear answers on how the outages occurred or who is responsible. Earlier this month, researchers at Umbrella Security, a subsidiary of OpenDNS, linked the outage to an issue with the Border Gateway Protocol (BGP); however, it was unclear then, and remains unclear, whether the BGP problems caused the outage or the outage caused the BGP problems. Either way, the Internet has played an integral role in this conflict like none before.
A hacker-collective of pro-Bashar al-Assad lackeys known as the Syrian Electronic Army has launched opportunistic and seemingly random attacks against apparent Syrian rebel sympathizers. The SEA first made a splash in the U.S. after hacking into and defacing a Harvard University site. They have also claimed responsibility for successfully compromising Twitter accounts belonging to the Onion and the Associated Press.
There has also been a healthy dose of controversy surrounding and condemnation toward western security firms accused of profiteering by selling Internet monitoring technology to the Syrian regime. Since the beginning of the conflict in the Mediterranean nation, there has been a consistent flow of reports accusing the Assad regime of using malware, keyloggers, and other sorts of cyberattacks to spy on and impede the work of pro-rebellion activists and dissidents.
On the other side too though, activists in Syria and around the world are playing a part in the cyber-arms race by developing new tools and ways of circumventing government surveillance attempts.
Industrial control system and SCADA honeypots have been tried before with relative success. While those systems were enticing to hackers who hammered away on them, they were also complicated, required real ICS and SCADA gear, and weren’t publicly available.
Two researchers from Norway and Denmark hope to change that dynamic with Conpot, short for Control Honeypot. Their project is a simple configuration for now, with a relatively small attack surface. They’re hoping to collect data from those who take what they started, deploy it on their own critical infrastructure networks and share the findings.
“The main goal is to make this kind of technology available for a general audience,” said Lukas Rist, a member of The Honeynet Project. “Not just for security researchers, but also people who are sysadmins setting up ICS systems who have no clue what could happen and want to see malware attacks against their systems and not put them in any danger.”
Rist and his fellow Honeynet Project partner Johnny Vestergaard have deployed one ICS honeypot already in a default configuration that simulates a basic Siemens SIMATIC S7-200 programmable logic controller (PLC). The configuration includes an input/output module and a Siemens communications processor CP 443-1 needed for network connectivity. Conpot supports two major ICS protocols, Modbus and SNMP, standard interfaces connecting industrial control systems and controllers.
“We formatted Conpot in such a way that it’s easy to customize and adjust it as a proof of concept,” Vestergaard said. “One of the main points is how easy it is to configure this honeypot. All of the configurations, everything from Modbus to the values of the internals and memory is all contained in a single XML file. We hope at one point, people will begin to customize this XML file and send it to us.”
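To make that single-file design concrete, the sketch below parses a hypothetical Conpot-style XML template with Python’s standard library. The element names and values here are illustrative assumptions, not Conpot’s real schema; the point is that device identity and register contents can live in one file a user edits and shares.

```python
import xml.etree.ElementTree as ET

# Hypothetical template in the spirit Vestergaard describes: device
# identity and Modbus register values in a single XML file.
# (Illustrative only; the real Conpot schema differs.)
TEMPLATE = """
<conpot_template name="S7-200">
  <device>
    <vendor>Siemens</vendor>
    <model>SIMATIC S7-200</model>
  </device>
  <modbus>
    <register address="1" value="100"/>
    <register address="2" value="0"/>
  </modbus>
</conpot_template>
"""

root = ET.fromstring(TEMPLATE)
vendor = root.findtext("device/vendor")
registers = {int(r.get("address")): int(r.get("value"))
             for r in root.findall("modbus/register")}
```

A honeypot built this way can answer Modbus reads from the `registers` map and report the `device` fields over SNMP, so swapping the XML file changes the emulated machine without touching code.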
Unlike other experiments with ICS honeypots that had access to real control systems, such as one conducted by Trend Micro’s Kyle Wilhoit and presented this spring at Black Hat EU, Conpot is strictly a copycat with some values of real gear making it look realistic enough for hackers to test.
“You want to emulate the fingerprint of a machine,” Rist said. “What Kyle deployed was one real machine in a honeypot. To achieve the same result, we need to emulate the realistic values of a certain kind of controller. Siemens has some special addresses it uses. If your honeypot runs on the SNMP protocol used by Siemens, the attacker can confirm this. It’s a look-alike competition.”
Rist and Vestergaard found documentation online for the Siemens gear they’re emulating, locating the values they were looking for in screenshots from real PLC gear. They have deployed one instance of their honeypot with a relatively small attack surface, and it has already been attacked three times. For example, there is no HMI, or human-machine interface (a visualization of SCADA or industrial control equipment), connected to their PLC. Connecting one would make their honeypot much easier to find via a special Google or SHODAN search, they said.
Rist said it was Wilhoit’s project that sparked his interest in building Conpot. Wilhoit’s honeypot ran for 28 days and mimicked an Internet management interface for a water pressure station, a server hosting an HMI system and another server hosting a PLC. Wilhoit told Threatpost that attackers had purpose-built malware for the particular gear he was using and tried to modify industrial processes once they accessed a system. Wilhoit’s honeypot was attacked 39 times, most of those attacks originating in China, Laos and the U.S. His dummy sites were ripe targets: default configurations and credentials were left in place, and the system names were optimized for Google searches.
Wilhoit said he observed attackers logging in, making changes to processes such as raising water temperatures or shutting down pumps, and logging out. A dozen of the 39 attacks targeted the specific gear in the honeypot and 13 were repeated by the same attackers.
Once others make use of Conpot, the data collected could be invaluable, Rist and Vestergaard said. The duo said they’ve already been contacted by a number of national CERTs as well as different academic institutions.
“One university guy contacted us who is creating an IPS for industrial control systems,” Rist said. “This data will be helpful to train systems on what attackers are looking for. There are a lot of different applications for this data.”
Rist said they hope to support more protocols in the future such as DNP3, as well as general protocols such as HTTP, FTP and SSH in order to simplify HMI integration.
“This is such an interesting topic because most of those systems never expected to show up on the Internet,” Rist said. “It’s quite a critical topic. No matter if it’s a big power plant or a small water pump, it’s easy to find those systems and play around with them. This stuff is so critical to national infrastructures and our needs; that’s what makes this so important.”
Mozilla fixed eight vulnerabilities, three of them critical, in the 21st build of its flagship Firefox browser yesterday.
One of the fixes remedies an Address Sanitizer memory corruption flaw (MFSA 2013-48) that could’ve allowed remote code execution. The other two critical flaws could’ve also led to arbitrary code execution and deal with fixing memory safety bugs (MFSA 2013-41), and a video resizing bug (MFSA 2013-46) in Firefox and Thunderbird.
For a complete list of the bugs fixed by Firefox 21, all 681 of them, head to Bugzilla.
The latest version of the browser also introduces something Mozilla is calling the Firefox Health Report, a tool that aims to give users a comprehensive look into the browser’s health and usage. The report will break down any insecure and unstable plugins it blocks throughout the day and will also document crash history and malware attack history, according to a post on Mozilla’s Future Releases blog by Jonathan Nightingale, the company’s vice president of engineering.
Users can choose whether they want the tool to share data Mozilla gathers about their browser with the company. If shared, the information will be aggregated and anonymized and used to help Firefox’s security team improve the browser. Users can change their preferences in the Data Choices section of the browser’s Options menu.
The update also brings expanded social API and Do Not Track options to help users better customize their privacy settings.
The social API opens the browser up to sidebar and toolbar providers like Cliqz, msnNOW and Mixi, while the Do Not Track update tweaks an already existing setting in the browser. The new default privacy setting doesn’t tell websites anything about the user’s tracking preferences. Users can change that and choose whether to tell sites they want to be tracked (Do Track, Do Not Track, No Preference) in the settings.
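The three choices correspond to states of the Do Not Track HTTP request header, which is either `1`, `0`, or absent. The sketch below maps them out; the function name and preference strings are illustrative, not Firefox’s actual implementation.

```python
def dnt_header(preference):
    """Map a tracking preference to the DNT request header.
    (Illustrative sketch of the header semantics, not Firefox code.)"""
    if preference == "do_not_track":
        return {"DNT": "1"}  # ask sites not to track
    if preference == "do_track":
        return {"DNT": "0"}  # signal that tracking is acceptable
    return {}                # no preference: header omitted entirely
```

The "No Preference" case is the interesting one: Firefox 21’s new default sends no header at all, so sites learn nothing about the user’s wishes rather than receiving an explicit opt-in or opt-out.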
The updates are being pushed to Firefox users via the browser’s automatic update system, per usual. Those who don’t have that set up can download them through both the Firefox and Thunderbird download pages.
The security community is one that thrives on controversy, drama and debate. For years–decades, really–no topic satisfied this desire like vulnerability disclosure. Long after every possible argument had been forwarded and the horse was not just dead but buried and the grave covered by a strip mall, the debate has limped along, like Happy Days post-shark jump. Now comes the flood of bilious opinions regarding the commercial exploit market, a discussion that feels even more pointless than the disclosure debate because there’s absolutely nothing to debate.
In the beginning, the disclosure debate was just that, a debate. People with well-formed opinions based on their experiences with finding and publishing vulnerabilities, or, on the other end of the equation, dealing with those reports and fixing the bugs. Most researchers argued that they had the right to do what they wanted with the vulnerabilities they found. For a long time, researchers generally kept details private and dealt with the vendors in the background, only publishing the details when a fix was ready. There were exceptions, researchers who simply published what they found whenever they felt like it, either never notifying the vendor or doing so a day or two before they posted their advisories.
That dynamic changed gradually as some researchers began using the possibility of full disclosure as a hammer to pressure vendors into responding to advisories more quickly and dealing with researchers in a professional manner. Some vendors got with the program, others didn’t. Some researchers chose to work with vendors within a loosely defined set of guidelines, others didn’t. And so it’s gone for the last decade or so.
There are reasonable arguments to be made on both sides of the disclosure debate, and there are smart, thoughtful people articulating a variety of positions. But there’s also a huge amount of invective, finger pointing and name-calling involved, all of which may be fun to watch, but it’s not very productive.
There are a lot of echoes of the disclosure debate in the current discussions about exploit sales. The commercial exploit market has developed relatively quickly, at least the public portion of it. Researchers have been selling vulnerabilities to a variety of buyers–government agencies, contractors, other researchers and third-party brokers–for years. But it was done mostly under cover of darkness. Now, although the transactions themselves are still private, the fact that they’re happening, and who’s buying (and in some cases, selling) is out in the open. As with the disclosure debate, there are intelligent people lining up on both sides of the aisle and the discussion is generating an unprecedented level of malice.
One difference this time around is that there are large piles of currency involved, not to mention the privacy, security–and in some cases, physical security–of people in countries around the world. Governments are buying exploits and using them for a variety of purposes. Some are using them to spy on their own citizens, while others are using them to attack their enemies’ networks. And government contractors and other private buyers are purchasing them for their own uses, as well.
Debating the morality or legality of selling exploits at this point is useless. This is a lucrative business for the sellers, who range from individual researchers to brokers to private companies. There are millions of dollars involved, and with that much money at stake, this business is not going away. And it is a business, make no mistake. Some sellers, such as VUPEN, say that they only sell exploits to NATO governments and will never sell to oppressive regimes. Chaouki Bekrar, the VUPEN CEO, has told me this many times, and I’ve heard him say the same thing to any number of other people in the last few years. I am inclined to believe him. But that’s almost beside the point. The issue is that once the exploit is sold, there’s no way to know how it will be used or who it may be shared with. A government buyer could act as a front for a third party that wouldn’t be able to buy the exploit on its own. And VUPEN is just one company. There are countless others that don’t have such explicit rules.
If you need a possible example, look no further than the odd situation that Moxie Marlinspike found himself in recently. Contacted by agents of the Saudi Arabian telecom company Mobily for help with technology to enable interception of traffic from Twitter, Viber and other apps, Marlinspike looked at a design document the group volunteered. He saw that they were contemplating buying SSL exploits as a way to solve their traffic-intercept problems. Marlinspike declined to help with the project, but said that he assumes Mobily will find a way around the issue.
“Their level of sophistication didn’t strike me as particularly impressive, and their existing design document was pretty confused in a number of places, but Mobily is a company with over 5 billion in revenue, so I’m sure that they’ll eventually figure something out,” Marlinspike, a security researcher and former Twitter security official, wrote.
“What’s depressing is that I could have easily helped them intercept basically all of the traffic they were interested in (except for Twitter – I helped write that TLS code, and I think we did it well). They later told me they’d already gotten a WhatsApp interception prototype working, and were surprised by how easy it was. The bar for most of these apps is pretty low.”
That kind of national-scale surveillance is just one application for exploits, commercial or otherwise. As Marlinspike said, even without his considerable knowledge and talent, it’s likely that Mobily had already found its own method for intercepting WhatsApp traffic. Governments, telecoms and other well-funded groups will find a way, whether it’s through their own research, the purchase of commercial exploits or some other method.
The debate shouldn’t be about whether exploits should be sold–they are, and nothing short of an outright legal ban is likely to change that. A commercial market has emerged for this information and markets with willing buyers and sellers don’t simply disappear. They typically expand until either the supply or the demand reaches a limit. There’s no shortage of demand for exploits right now, and the supply will continue to flow as long as the money is there.
Welcome to the era of surveillance.
Microsoft wasted no time today delivering a patch for the Internet Explorer 8 vulnerability being exploited in watering hole attacks carried out against the U.S. Department of Labor website and nine others worldwide. Today’s Patch Tuesday security updates also include a fix for IE vulnerabilities exploited during the Pwn2Own Contest earlier this year.
Details on the DoL attack quickly emerged following the initial reports on May 1 that the agency’s Site Exposure Matrices website had been compromised, likely targeting Department of Energy (DoE) researchers working on nuclear weapons programs. This week it was revealed that a site in Cambodia was also serving malware exploiting the IE 8 vulnerability and targeting workers for the United States Agency for International Development (USAID).
Microsoft urges consumers and business users still on IE 8 to patch the browser immediately, or upgrade to newer versions. In the meantime, some experts are calling on Microsoft to consider revamping its browser update method to perhaps model that used by Mozilla and Google.
“On one level, this is Microsoft at their security best. They responded promptly to a publicly disclosed issue and got the fix out in the next scheduled wave of patches,” said Rapid7 senior manager of security engineering Ross Barrett. “On another level, this issue, along with the fact that every single month we see another round of critical Internet Explorer patches, highlights what is wrong with Microsoft’s patching and support models.”
Microsoft has updated IE in every Patch Tuesday update this year, including an out-of-band patch in January that resolved a vulnerability used in another watering hole attack.
“Compare this to Google’s Chrome browser, which quietly patches itself as fixes become available and has no down-level supported ‘old version,’ which exposes millions of their users to risk. Or compare it to Firefox, which has straddled the fence with periodic Long-Term-Support (LTS) releases for the risk-averse IT departments but now defaults its users to the same model as Chrome,” Barrett said. “Microsoft is tying up resources in maintaining the older versions and extending the window by which users are exposed to risk with their opt-in updates and periodic patching model.”
Microsoft resolves the IE 8 bug in MS13-038, one of 10 bulletins released today. The critical update supplants a temporary Fix-It mitigation Microsoft released last week, an MSHTML Shim Workaround for CVE-2013-1347. The vulnerability is present in IE 8 only and is a use-after-free memory corruption flaw that enables remote code execution, and while IE 8 is an old version of the browser, it still has the highest market share with 23 percent, according to Net Market Share.
MS13-037, meanwhile, also has experts concerned now that details are public. It is a cumulative update for IE that addresses the Pwn2Own vulnerabilities exploited by security company VUPEN.
VUPEN CEO Chaouki Bekrar told Threatpost his researchers used four zero-day exploits against Microsoft products during Pwn2Own, including memory corruption, sandbox-bypass and ASLR-bypass bugs affecting IE 6 through 10.
“The exploit is rated a ‘1’ on the Microsoft Exploitability Index, meaning that Microsoft expects exploits to be developed within the next 30 days and that the attack vector would be a malicious website,” said Wolfgang Kandek, Qualys CTO. “Patch this vulnerability as soon as possible.”
MS13-039, meanwhile, is rated important, but could lead to a denial-of-service condition on boxes running Windows’ IIS webserver software. The vulnerability could be disruptive to organizations running remote services or Active Directory integrations on http.sys.
“The good news is that only Windows 2012 web servers are affected. All IT security teams should jump on this quickly as an exploit is likely to be developed very quickly. A successful exploit could cause a DoS on affected servers, creating temporary outages,” said Andrew Storms, director of security operations for nCircle, a Tripwire company. “The bad news is that a successful exploit of this bug could have serious implications for public web servers without some kind of inline IPS in front of them. Essentially, any user could launch a simple attack and the server will be offline. It’s also worthwhile to note that many Microsoft servers have IIS turned on — including Exchange and SharePoint — so a successful exploit could potentially impact critical company infrastructure.”
The remainder of the bulletins were rated important by Microsoft and include a number of remote code execution, information leakage and privilege escalation bugs.
- MS13-040: patches a spoofing vulnerability in the .NET Framework that could allow an attacker to modify the contents of an XML file.
- MS13-041: fixes a flaw in Microsoft Lync that could enable remote code execution if an attacker tricks a user into viewing malicious content.
- MS13-042: takes care of vulnerabilities in Microsoft Publisher that could allow an attacker to remotely execute code if a user opens a malicious Publisher file.
- MS13-043: patches a Word flaw that could give an attacker the same privileges as the user on a compromised machine.
- MS13-044: fixes a Visio vulnerability that could lead to information disclosure if a user opens an infected Visio file.
- MS13-045: repairs a Windows Essentials vulnerability that could lead to information disclosure if a user opens Windows Writer using a malicious URL.
- MS13-046: resolves a privilege escalation vulnerability in kernel-mode drivers that can be exploited if an attacker logs onto a system with valid credentials and runs a malicious application.
Exploits for vulnerabilities in Adobe’s ColdFusion application server have been at the heart of a number of incidents this year, including a compromise of servers belonging to the Washington State Court system. This level of action has prompted Adobe to release five security updates for the software this year already, including hotfixes sent out today for two vulnerabilities being exploited in the wild.
Adobe, which for a few months has been synchronizing its monthly security updates with Microsoft’s, also released patches today for vulnerabilities in Adobe Reader and Flash Player; none of those flaws are actively being exploited.
It remains unclear which ColdFusion vulnerability was the center of the Washington State breach, though the court said in a statement there were breaches in February and March. An Associated Press report last week said the vulnerability exploited in the attack had already been patched.
The fixes released today address vulnerabilities in ColdFusion 10, 9.0.2, 9.0.1 and 9.0 for Windows, Mac and Unix. One vulnerability, CVE-2013-1389, enables remote code execution on a server running ColdFusion, while the other, CVE-2013-3336, allows unauthorized remote access to files stored on the server. It is this bug, Adobe said, that is currently being exploited.
Adobe also patched 13 memory corruption vulnerabilities in Flash Player that could cause the ubiquitous media player to crash and allow attackers to gain remote control over a compromised computer. The updated Windows build was given the most critical rating; Mac, Linux and Android patches were also released, as was a fix for Adobe AIR.
The Adobe Reader bulletin patches 30 vulnerabilities in Reader and Acrobat 11.0.02 for Windows and Mac, and Reader 9.5.4 and earlier 9.x versions for Linux. The vulnerabilities involved include 18 memory corruption vulnerabilities that could lead to remote code execution. The remainder of the security updates resolve integer underflow, use-after-free, stack overflow, buffer overflow, integer overflow and information leakage vulnerabilities.
Unlike the ColdFusion bugs, none of the Flash or Reader vulnerabilities have been spotted in the wild, Adobe said.
In the Washington State breach, hackers took advantage of an unpatched ColdFusion instance to grab as many as 160,000 Social Security numbers belonging to anyone booked into a city or county jail between September 2011 and December 2012. Driver’s license numbers belonging to up to one million Washington citizens may also have been accessed, the court said.
“The vast majority of the site contains non-confidential, public information. No personal financial information, such as bank account numbers or credit card numbers, is stored on the site,” they said in the statement. “However, other data stored on the server did include social security numbers, names, dates of birth, addresses, and driver license numbers that may have been accessed. Although there is no hard evidence confirming the information was in fact compromised, the data was still vulnerable and should be considered as potentially exposed.”
A news report says the beleaguered Bloomberg financial data and news service accidentally posted online more than 10,000 private messages between traders and clients at some of the world’s largest banks. The breaches, said to be part of a former employee’s data mining project, took place in 2009 and 2010.
The revelation, first reported by The Financial Times, will do little to restore public confidence in the company’s data security after its editor-in-chief admitted just hours earlier on Monday that the news agency had allowed its journalists access to confidential client data since the 1990s.
“Our reporters should not have access to any data considered proprietary. I am sorry they did. The error is inexcusable,” wrote Matthew Winkler in an opinion piece on the Bloomberg Web site. “Last month, we immediately changed our policy so that reporters now have no greater access to information than our customers have. Removing this access will have no effect on Bloomberg news-gathering.”
The company is being investigated by a number of agencies, including the European Central Bank and U.S. Treasury and U.S. Federal Reserve, after senior executives at Goldman Sachs complained that a Hong Kong-based Bloomberg reporter had called to ask about a partner’s employment status after noticing the person hadn’t logged into a Bloomberg terminal for some time.
Winkler said the company’s reporters had limited access to data, including login histories and “high-level types of user functions on an aggregated basis, with no ability to look into specific security information.”
The company supplies financial terminals to traders, regulators and central bankers worldwide for about $20,000 annually. It reportedly has more than 315,000 terminal subscribers, who use the service to gather real-time data on markets and instant message each other.
On Friday, the CEO and president of Bloomberg LP, the parent company, posted on the Bloomberg Blog that reporters never accessed “trading, portfolio, monitor, blotter or other related systems or our clients’ messages.”
“Last month we changed our policy so that all reporters only have access to the same customer relationship data available to our clients,” wrote Daniel Doctoroff on Friday. “Additionally, we decided to further centralize our data security efforts by appointing one of our most senior executives to the new position of Client Data Compliance Officer. This executive is responsible for reviewing and, if necessary, enhancing protocols which among other things will continue to ensure that our news operations never have access to confidential customer data.”
The latest breach, involving more than 10,000 messages, was discovered by a Financial Times reporter doing a Google search. After the journalist contacted the company for comment on Monday, the confidential lists were immediately removed from the Internet.
The private messages were part of a data-mining project being done with a client’s consent by an employee who is no longer with the company. They involved confidential exchanges between traders and their clients at dozens of the world’s largest banks and had been available for public consumption for several years.
New York City Mayor Michael Bloomberg, the majority owner of the financial information company, has not been involved in its daily operations since he took office in 2002. He has declined to comment on the privacy and security breaches, citing an agreement with the city’s Conflicts of Interest Board.
Blog: Microsoft Updates May 2013 - Slew of Internet Explorer Critical Vulnerabilities, Kernel EoP, and Others
Microsoft released a long list of updates for Microsoft software today. The most interesting appear to be those patching Internet Explorer and kernel software vulnerabilities. In all, ten critical "use-after-free" vulnerabilities are patched in IE, along with one important information disclosure flaw and three elevation of privilege vulnerabilities. Almost all of these IE vulnerabilities were reported by external security researchers working through HP's Zero Day Initiative.
Facebook users are being warned of malicious Firefox and Chrome extensions that can give an attacker remote control over a Facebook profile.
Microsoft has seen an increase in activity around these extensions, in particular in Brazil. The threat is detected as Trojan:JS/Febipos.A and has been updated recently.
“This Trojan monitors a user to see if they are currently logged in to Facebook. It then attempts to get a configuration file from the website <removed>[.]info/sqlvarbr.php,” said Jonathan San Jose of the Microsoft Malware Protection Center. “The file includes a list of commands of what the browser extension will do.”
The malware can add posts to a profile, like pages, join groups or invite others to join groups, chat and comment on posts. So far, Microsoft said it has seen posts in Portuguese on hijacked profiles trying to get users to click on a link, purported to be a video about a bullying-related suicide. Facebook has already blocked the link as malicious.
The Trojan, meanwhile, acts as a dropper and opens backdoor connections. When the malware infects Chrome, it tries to connect to du-pont.info/updates/[removed]/BL-chromebrasil[.]crx, while on Firefox, the connection is to du-pont.info/updates/[removed]/BL-mozillabrasil[.]xpi. The malware then attempts to update itself from either of those domains.
The malware’s capabilities and messages it posts to entice other users to infect themselves depends on the configuration file downloaded to the malware, Microsoft said. One link Microsoft shared as an example had 2,746 Likes, had been shared 167 times and had 165 comments, indicating a notable number of potential victims. Within hours after the initial analysis, all of those numbers had risen.
“There may be more to this threat because it can change its messages, URLs, Facebook pages and other activity at any time,” Microsoft’s San Jose said.
IE users are not at risk, Microsoft added.
Google and Mozilla have recently added protections that address threats via browser extensions. In December, Google announced that it would halt silent extension installs in Chrome. These had been performed via a Windows registry mechanism, a feature that allows extensions to be installed alongside other applications, enabling third parties to opt users in without their consent.
Those are now disabled by default in Chrome and a dialog pops up explaining the effect of the extension on the browser and any potential risks. The new feature also automatically disables any extensions installed using external deployment options in the past as well.
Mozilla, meanwhile, added a click-to-play feature beginning with Firefox 17 in November that prevents users from running out of date or vulnerable plug-ins or extensions. The move was designed to block exploits targeting these older versions of plug-ins such as Adobe Flash and Reader.