Yet another commercial crimekit has been spotted making the rounds on the underground malware forums that uses the anonymity network Tor to stealthily communicate with its command and control servers.
While it isn’t the first of its kind to use Tor, the kit, nicknamed Atrax, is cheap and comes with a slew of capabilities, including browser data extraction, Bitcoin mining and the ability to launch DDoS attacks.
Named after a genus of Australian funnel-web spiders, Atrax runs about $250 – Bitcoin only – making it one of the more affordable kits available. Atrax comes with a few add-ons, including a plugin stealer ($110), an experimental coin-mining add-on ($140) and a form grabber ($300), according to Jonas Mønsted of the Danish security firm CSIS, who described the kit in depth in a blog entry earlier today.
While some of the add-ons, notably the form grabber, cost more than the actual kit, Atrax comes with free updates, support and bug fixes, perks that could catch an attacker’s eye.
In the Atrax rundown, Mønsted writes that “communication over TOR is already encrypted, so no extra communication encryption” is needed and that the kit doesn’t use “suspicious Windows APIs.”
The kit’s author claims Atrax’s size (1.2 MB) is due to “TOR integration and x64/x86 code.”
The plug-in stealer looks to have a wealth of functionality, boasting the ability to steal information from Chrome, Firefox, Safari, Internet Explorer and Opera browsers.
Atrax has opened its arms to the burgeoning world of Bitcoin as well. The kit’s author claims it can steal information from users’ Bitcoin wallets (such as Armory, Bitcoin-Qt, Electrum and Multibit) and also mine for Bitcoin and a lesser-known alternative, Litecoin.
While CSIS has yet to track down an active sample of the Atrax kit, it sounds like it should fit alongside other recently discovered botnets and malware tools that also rely on the Tor network to communicate.
Mevade, one of the more prominent Tor-based botnets, gained unwanted publicity when it shifted to the covert communication protocol at the end of this past summer. Thanks to the botnet, Tor saw a gigantic uptick in users in August, to 2.5 million from 500,000, something that got it detected but didn’t prove to be its complete undoing.
Activity stemming from Mevade was later spotted in September by Microsoft lending a hand to Sefnit, a strain of malware long thought dead that was revived after it found a new component to carry out click fraud.
Twitter took another step toward not only securing the privacy of its users’ communication over the social network, but in warding off the prying eyes of government surveillance with the implementation of Perfect Forward Secrecy. The technology thwarts the efforts of anyone who may be collecting Twitter traffic today with the hope of cracking the private key securing it tomorrow.
“At the end of the day, we are writing this not just to discuss an interesting piece of technology, but to present what we believe should be the new normal for web service owners,” said Twitter security engineer Jacob Hoffman-Andrews. “A year and a half ago, Twitter was first served completely over HTTPS. Since then, it has become clearer and clearer how important that step was to protecting our users’ privacy.”
Perfect Forward Secrecy ensures that private session keys securing an encrypted connection are random and if one is compromised, it cannot be used to compromise other messages.
“When an encrypted connection uses perfect forward secrecy, that means that the session keys the server generates are truly ephemeral, and even somebody with access to the secret key can’t later derive the relevant session key that would allow her to decrypt any particular HTTPS session,” wrote Parker Higgins, an activist with the Electronic Frontier Foundation. “So, intercepted encrypted data is protected from prying eyes long into the future, even if the website’s secret key is later compromised.”
While Yahoo and other laggards have either only recently deployed HTTPS across their web services or have yet to do so, Twitter extends its leadership among Internet companies. Twitter announced that forward secrecy has been enabled not only on twitter.com but on api.twitter.com and mobile.twitter.com. A recent EFF crypto report shows that Twitter is among a handful of major companies that deploys forward secrecy; others include Facebook, Dropbox, Google, Tumblr and SpiderOak.
Twitter encouraged other companies to implement not only HTTPS as the default, but harden it with HSTS, certificate pinning and forward secrecy.
“Security is an ever-changing world. Our work on deploying forward secrecy is just the latest way in which Twitter is trying to defend and protect the user’s voice in that world,” Twitter’s Hoffman-Andrews said.
Hoffman-Andrews explained in his blogpost that Twitter has enabled the EC Diffie-Hellman cipher suite to support forward secrecy.
“Under those cipher suites, the client and server manage to come up with a shared, random session key without ever sending the key across the network, even under encryption,” he said. “The server’s private key is only used to sign the key exchange, preventing man-in-the-middle attacks.”
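The exchange Hoffman-Andrews describes can be sketched with a toy finite-field Diffie-Hellman example. This is illustration only, with deliberately tiny, insecure parameters (real TLS uses standardized groups or elliptic curves): both sides generate fresh per-session secrets and arrive at the same key, while only the public values cross the network.

```python
import secrets

# Toy parameters for illustration only -- far too small to be secure.
P = 4294967291  # the largest prime below 2**32
G = 5           # generator-like base

def ephemeral_keypair():
    priv = secrets.randbelow(P - 2) + 1   # fresh secret, discarded after the session
    pub = pow(G, priv, P)                 # the only value sent over the network
    return priv, pub

# Each side generates a throwaway key pair for this session.
client_priv, client_pub = ephemeral_keypair()
server_priv, server_pub = ephemeral_keypair()

# Both compute the same session key from the peer's public value;
# the key itself is never transmitted, even under encryption.
client_key = pow(server_pub, client_priv, P)
server_key = pow(client_pub, server_priv, P)
assert client_key == server_key
```

Because the secrets are ephemeral, a server's long-term private key (used only to sign the exchange) cannot later be used to recover these session keys, which is the property forward secrecy provides.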
The Snowden leaks have demonstrated that the NSA is adept at not only collecting phone call metadata, but practically any data it chooses, from email address books, to searches and other Internet traffic. HTTPS and other encryption offshoots put up hurdles for the NSA. Meanwhile, major web services providers such as Yahoo, which will only deploy HTTPS by default on its services at the start of the new year, don’t put up a barrier at all.
EFF staff attorney Seth Schoen told Threatpost that HTTPS—SSL and/or TLS encryption—is something users should demand and developers should consider normal and standard in new applications. But he cautioned that HTTPS is a minimum standard of protection and that forward secrecy and HSTS, for example, should be considered as well.
Schoen said that enabling Perfect Forward Secrecy requires computational resources and additional costs, but he also said that those were some of the same arguments companies used as a counter to enabling HTTPS. However, Schoen said, computers are getting faster and there’s less of a CPU resource burden today than a half-dozen years ago.
“There’s been a lot of speculation about Moore’s Law and how long that curve will last,” Schoen said. “But as long as we are on the curve for the time being, cryptography that seemed so intensive may not be so if we look again. Five or six years ago, that might have seemed like a huge computational burden, but today that might not be because CPUs are a lot faster.”
Researchers have discovered a mature attack platform that’s enjoyed great success eluding detection and made good use of an exploit present in a number of espionage campaigns.
The attacks have concentrated largely on the automotive industry, hitting large companies primarily in Asia, and only after being tested against activist targets in the region. Nicknamed Grand Theft Auto Panda by researcher Jon Gross of Cylance, the attacks rely on well-worn exploits for CVE-2012-0158. Malicious Microsoft Office documents are sent to the victim, who must open the .xls, .doc or other file delivered in a phishing email or hosted on a website in order to trigger the vulnerability and allow attackers to inject malware or cause a service disruption.
These attacks are not carried out on the same scale as those by the Comment Crew or other high profile APT gangs. Specific targets are chosen in these campaigns, and those targets are phished with convincing messaging, such as a negative customer service review as in one attack spotted by Cylance.
The platform has been around for a few years and can be used to steal not only system and network information, but documents and credentials, in addition to opening a backdoor connection to the attacker in order to move stolen data.
“It’s more of an extensible platform to where they can add in any functionality they want as a plug-in. It’s more of an infection framework than any specific Trojan,” Gross said. “They can modify the components over time and not have to really worry about it if the main component is never detected. This is more like extensible platform where they add in functionality, screen capture, key logging, they just send it up as a plug in.”
CVE-2012-0158, meanwhile, has been a favorite among nation-state attackers seeking to infiltrate corporations or activist groups for espionage or surveillance. It was detected in the Icefog and NetTraveler campaigns discovered by Kaspersky Lab. Both were linked to operatives in China and follow similar patterns as GTA Panda in that they’re attacking both activists and manufacturing companies.
“We see a lot people who are attacking industries, also attacking human rights groups. We’ve always thought it just comes down as a directive from whomever to test this against them,” Gross said. “We see a lot of new malware tested against human rights activists before it ever makes its way to the corporate environments. The original stuff I found was not targeted against human rights, but as I dug into it, I saw more and more stuff that was also additionally targeting human rights; and that was older stuff before they moved on to corporations.”
NetTraveler, for example, made use of the CVE-2012-0158 Office exploits to target the Uyghur and Tibetan activists, before moving on to oil and energy companies as well as diplomats and government agencies around the world.
“It’s kinda like a Darwinian evolution of malware. If it passes the first test, it’s survival of the fittest. The things that don’t get detected get reused,” Gross said. “Human rights are almost like a playground. They’re always a target, and we see a lot of malware that’s used against them before anyone else.”
As for the platform, its staying power is due to its stealth.
“The big thing is moving functionality out of the actual files that get loaded into [victims’ machines] because then it doesn’t look suspicious until that file subsequently loads something else that performs the malicious activity,” Gross said. “The malicious components are sitting there encrypted on disk, where your typical security product is not going to find that unless they already know about it.”
There are also layers of encryption protecting the attack that shield it from detection, Gross said. As for the exploits, lax patching is likely the biggest culprit; in this case, CVE-2012-0158 was patched more than 18 months ago by Microsoft. Combine that with effective social engineering in the phishing messaging—in particular from spoofed, trusted email addresses—and that’s a potent cocktail for trouble.
“If you get emails that look like they’re coming from trusted parties and people you usually communicate with, then our guard drops and we’re much more likely to say OK, I’ll open that,” Gross said. “I think they rely on that really heavily, especially with the activist community because they know all these people and they know who they communicate with on a regular basis and they try to make it look like it comes from them. Their guard’s totally down and they’re not worried about it.”
A recent set of Google patches included a fix for a serious Gmail account recovery vulnerability, the details of which have been disclosed.
Researcher Oren Hafifi of Israel points out in his disclosure that unlocking a Google password opens the door to much more than email, elevating the risk.
“Did you ever stop and ask what does GMAIL stand for? It’s the Global Main Authentication and Identification Library. Seriously, if someone got access to your Gmail account, he can ‘password recover’ his way to any other web/mobile application out there,” Hafifi wrote on his blog.
Hafifi combined cross-site scripting, cross-site request forgery, and password flow bypass to pull off this hack.
The attack starts with a spoofed Google phishing email sent to a Gmail user. Hafifi explains the gory details in his blog about the ins and outs of why his attack works, but essentially the phishing email must be customized with the victim’s email address in the URL.
The link, however, should refer to the attacker’s site where a cross-site request forgery (CSRF) is requested. Next a cross-site scripting attack launches and the user is presented with a phony password reset option.
“The user clicks ‘Reset Password’ and from here, the sky is the limit,” Hafifi wrote.
Once the user tries to reset their password and recover their account, the attacker is in the background receiving the new password and cookie information.
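One standard defense against reset flows being driven by CSRF is to bind each reset token to the specific account and a server-side secret, so a token minted in one context cannot be replayed for another user. The sketch below is hypothetical and is not Google's actual fix; the function names, secret handling and expiry window are invented for illustration.

```python
import hashlib
import hmac
import secrets
import time

SERVER_SECRET = secrets.token_bytes(32)  # hypothetical per-deployment secret

def issue_reset_token(email: str) -> str:
    """Mint a reset token bound to one account and a timestamp."""
    ts = str(int(time.time()))
    mac = hmac.new(SERVER_SECRET, f"{email}|{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}.{mac}"

def verify_reset_token(email: str, token: str, max_age: int = 900) -> bool:
    """Accept the token only for the account it was minted for, and only briefly."""
    try:
        ts, mac = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, f"{email}|{ts}".encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False  # forged, or bound to a different account
    return int(time.time()) - int(ts) <= max_age

token = issue_reset_token("victim@example.com")
assert verify_reset_token("victim@example.com", token)
assert not verify_reset_token("attacker@example.com", token)  # not transferable
```

A cross-site request arriving without a token that validates for the logged-in account would simply be rejected, cutting off the CSRF leg of the chain.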
Hafifi said Google patched the vulnerability within 10 days and he is in line to receive a bug bounty and another Hall of Fame recognition from Google.
Encryption, once a tool used mainly by security professionals, activists and others with reason to suspect their communications may be at risk, has been moving ever deeper into the mainstream in recent months. Now, Microsoft is planning to roll out a new encrypted email service on its Office 365 site that will make sending and receiving secure email much simpler.
The new service, known as Office 365 Message Encryption, is designed to simplify the process of using encrypted email, something that hasn’t been as easy as most users would like. Setting up and using many secure email applications can be an arduous and confusing process, particularly for users who may not be familiar with security. Microsoft’s new service, which will be available in the first quarter of 2014, uses a system that’s somewhat similar to other secure email systems, wherein a user receives an email with an encrypted attachment and instructions for opening it.
“No matter what the destination – Outlook.com, Yahoo, Gmail, Exchange Server, Lotus Notes, GroupWise, Squirrel Mail, you name it – you can send sensitive business communications with an additional level of protection against unauthorized access. There are many business situations where this type of encryption is essential,” Microsoft’s Shobhit Sahay said in a blog post explaining the new service.
“When an external recipient receives an encrypted message from your company, they see an encrypted attachment and an instruction to view the encrypted message. You can open the attachment right from your inbox, and the attachment opens in a new browser window. To view the message, you just follow the simple instructions for authenticating via your Office 365 ID or Microsoft Account.”
Since the start of the summer, when the Edward Snowden NSA leaks began, encrypted communications have become a hot topic in the security and privacy communities, as well as in the wider user community. The secure email service reportedly used by Snowden, Lavabit, shut down in August, as did the Silent Mail system run by Silent Circle, both moves coming on the heels of government demands for Lavabit’s SSL keys.
Microsoft’s new service isn’t really the same kind of system as those, but it’s meant to help businesses secure their sensitive communications through the use of a variety of encryption schemes. When the data is at rest in Microsoft’s data center, it will be protected by BitLocker. The connection between the client and the Office 365 servers is protected by SSL and the messages will be encrypted and signed using S/MIME.
The system will use a simple Web interface for administration, and enterprise administrators have the ability to set up rules that determine which emails will be encrypted.
“The Message Encryption interface, based on Outlook Web App, is modern and easy to navigate. You can easily find information and perform quick tasks such as reply, forward, insert, attach, and so on. As an added measure of protection, when the receiver replies to the sender of the encrypted message or forwards the message, those emails are also encrypted,” Sahay said.
Image from Flickr photos of FutUndBeidl.
Dennis Fisher and Mike Mimoso discuss the major security stories of the last two weeks, including the BGP route hijacking, why Do Not Track doesn’t work and the We Are the Cavalry movement.
http://threatpost.com/files/2013/11/digital_underground_135.mp3
Debian has released patches for a pair of security vulnerabilities in the free operating system, including a security bypass flaw in the Nginx Web server. The other vulnerability lies in a Perl module used in the OS.
The vulnerability in the HTTP::Body Perl module could allow an attacker to run arbitrary commands on a vulnerable Debian server.
“The HTTP body multipart parser creates temporary files which preserve the suffix of the uploaded file. An attacker able to upload files to a service that uses HTTP::Body::Multipart could potentially execute commands on the server if these temporary filenames are used in subsequent commands without further checks. This update restricts the possible suffixes used for the created temporary files,” the Debian advisory says.
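The fix, as the advisory says, works by restricting which suffixes the temporary files may carry, so an attacker cannot smuggle an executable extension through an upload. A rough Python analogue of that idea (the helper name and whitelist here are invented for illustration, not taken from HTTP::Body) might look like:

```python
import os
import tempfile

# Only benign suffixes survive; anything else (.pl, .sh, .php, ...) is replaced
# with a neutral one, mirroring the spirit of the HTTP::Body fix.
ALLOWED_SUFFIXES = {".txt", ".jpg", ".png", ".pdf"}

def safe_tempfile_for_upload(original_name: str):
    """Create a temp file whose suffix is restricted to a safe whitelist."""
    suffix = os.path.splitext(original_name)[1].lower()
    if suffix not in ALLOWED_SUFFIXES:
        suffix = ".upload"  # neutral fallback for attacker-chosen extensions
    return tempfile.NamedTemporaryFile(suffix=suffix, delete=False)

f = safe_tempfile_for_upload("report.pdf")
assert f.name.endswith(".pdf")
f.close(); os.unlink(f.name)

g = safe_tempfile_for_upload("exploit.pl")  # dangerous suffix is discarded
assert g.name.endswith(".upload")
g.close(); os.unlink(g.name)
```

The underlying danger is that downstream code may branch on, or execute, a file based on its extension; stripping attacker control over the suffix removes that lever.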
The second vulnerability is a bug in the Nginx Web server that enables an attacker to bypass the security restrictions in Debian. Found by Ivan Fratric of the Google security team, the vulnerability is a serious one. It “might allow an attacker to bypass security restrictions by using a specially crafted request,” Debian said in its advisory.
Users running vulnerable versions of Debian are encouraged to upgrade as soon as possible.
Stuxnet, it turns out, was a two-headed beast: one head could have laid waste to the Natanz nuclear facility it infected, and the other, by expert accounts, should have remained undetected if not for the noisier yet less complex second attack routine that is now familiar to the world.
Industrial control system and SCADA expert Ralph Langner wrote an article for Foreign Policy magazine and a paper, published on his website this week, that throw back the covers on an older, more complex and stealthier version of the malware, markedly different from the second attack routine that emerged in 2010.
“It turns out that it was far more dangerous than the cyberweapon that is now lodged in the public’s imagination,” Langner wrote. “Without that later and much simpler version, the original Stuxnet might still today sleep in the archives of antivirus researchers, unidentified as one of the most aggressive cyberweapons in history.”
Langner said the older, lesser known Stuxnet—put in place in 2007—targeted the protection systems around cascades of centrifuges used to enrich uranium at the plant. The attackers were keenly aware of weaknesses in plant design and process execution. They knew the Iranians were content to accept a percentage of faulty centrifuges because they had designed a protection system that enabled enrichment to continue amid the breaking centrifuges, Langner said.
“The system might have kept Natanz’s centrifuges spinning, but it also opened them up to a cyberattack that is so far-out, it leads one to wonder whether its creators might have been on drugs,” Langner wrote.
Ingeniously, the malware had the capability of recording 21 seconds of activity from the protection system’s sensors, showing a healthy stream of activity. That 21 seconds was looped over and over on monitoring screens while the attack was executed. Engineers thought they were watching an enrichment process hum along as designed that instead was spinning out of control. The malware attacked industrial controllers built by Siemens, closing crucial valves causing pressure to go up, gases to collect, and centrifuges to figuratively blow up.
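The record-and-replay trick can be reduced to a toy sketch. This is purely illustrative and bears no relation to the actual Stuxnet code: capture a short window of healthy readings, then loop that window to the monitoring side while the real values diverge.

```python
from itertools import cycle, islice

# Pretend sensor: pressure climbs dangerously once the attack begins.
real_readings = [100 + i for i in range(60)]

# Step 1: record a window of "healthy" samples
# (Stuxnet reportedly captured about 21 seconds' worth).
recorded = real_readings[:21]

# Step 2: during the attack, feed the monitoring screens the recorded
# loop instead of live data, so operators see a steady, normal process.
displayed = list(islice(cycle(recorded), len(real_readings)))

assert max(displayed) == max(recorded)      # the display never shows the spike
assert max(real_readings) > max(displayed)  # reality has long since diverged
```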
The attackers, Langner said, could have let them literally blow up, causing catastrophic destruction. They didn’t, keeping their cover as a result, he said. “The implementation of the attack with its extremely close monitoring of pressures and centrifuge status suggests that the attackers instead took great care to avoid catastrophic damage,” Langner wrote.
Langner’s analysis called the attack over-engineered for the task and noted that any slip-up would have risked detection by the Iranians. Two years after the first Stuxnet was in place, in 2009, the second phase was introduced.
The variant attacked another process control system that controlled rotor speeds in the centrifuges and was a self-replicating worm that moved within the plant’s network and on portable USB drives; the older version, Langner said, was deliberately installed on plant computers, likely by an agent of the attackers.
“This suggests that the attackers had lost the capability to transport the malware to its destination by directly infecting the systems of authorized personnel, or that the centrifuge drive system was installed and configured by other parties to which direct access was not possible,” Langner wrote.
This version of Stuxnet has been well documented, from its use of a number of Windows zero-day exploits and malware signed with stolen Microsoft digital certificates. Langner said this version of Stuxnet was written by hackers, skilled in writing malicious code, while the first attack was coded alongside experts adept in industrial control systems, not IT. Langner points a finger at the National Security Agency as the authors of Stuxnet, calling it the only logical location for its development.
This version and approach to attacking the Iranians’ nuclear capabilities left fingerprints—strange behavior in the industrial processes that could, and would, be detected. And while the attackers could have caused catastrophic destruction at any time, Langner estimates they instead set the country’s nuclear program back by only two years.
“The attackers were in a position where they could have broken the victim’s neck, but they chose continuous periodical choking instead,” Langner wrote. “Stuxnet is a low-yield weapon with the overall intention of reducing the lifetime of Iran’s centrifuges and making the Iranians’ fancy control systems appear beyond their understanding.”
Langner also speculates that Stuxnet was not built to escape beyond Natanz’s walls, yet it did, likely through contractors who worked at the plant leaving with laptops infected with Stuxnet and plugging them in at other industrial facilities where they were contracted. Stuxnet was designed to spread only on local networks, or via portable drives, Langner said.
He also wrote that it was likely the attackers’ intention to allow Stuxnet to spread since the malware reports IP addresses and hostnames of infected systems to a command infrastructure. The attackers could monitor the movement of contractors, likely in the hopes of spotting other nuclear facilities in Iran operating under the radar, he wrote.
The danger too is that future weaponized attacks such as Stuxnet can follow this same path into a facility because, as Langner put it, contractors are good at engineering tasks but lousy at cybersecurity and could be unwitting pawns in deploying another such weapon at any time.
Langner estimates that 50 percent of the investment into Stuxnet was put into hiding the attack; future attacks may not require the same kind of investment, and therefore may not need the resources of a nation-state such as Stuxnet did, Langner wrote.
“And unlike the Stuxnet attackers, these adversaries are also much more likely to go after civilian critical infrastructure. Not only are these systems more accessible, but they’re standardized,” he wrote.
Ultimately, Langner said, Stuxnet may have served two purposes: 1) disrupt the Iranian nuclear program; and 2) allow the attacker to flex its cyberweaponry muscle.
“Operation Olympic Games started as an experiment with an unpredictable outcome. Along the road, one result became clear: Digital weapons work. And different from their analog counterparts, they don’t put military forces in harm’s way, they produce less collateral damage, they can be deployed stealthily, and they are dirt cheap,” Langner wrote. “The contents of this Pandora’s box have implications much beyond Iran; they have made analog warfare look low-tech, brutal, and so 20th century.”
Dennis Fisher talks with several members of the We Are the Cavalry project, including Josh Corman, Robert Hansen, Space Rogue and John Dickson, about the movement’s origins, its goals to promote research on topics such as medical device security and how to help change the perception of security research.
http://threatpost.com/files/2013/11/digital_underground_134.mp3
NEW YORK–The term “best practices” is high on the list of overused and nearly meaningless phrases that get thrown around in the security field. It forms the basis for regulations such as HIPAA and PCI DSS and yet if you asked a random sample of 10 security people what the phrase meant, you’d likely get 10 different answers. But what if there aren’t actually any best practices?
“I think there are no best practices, just things that work for you in the right scenario,” Jeremiah Grossman, CTO of WhiteHat Security, said in an interview at the OWASP AppSec USA conference here Thursday. “What’s important is trying to ascertain what those are.”
The process of discovering what works in security has traditionally been one of trial and error. Insert Shiny Defensive Technology A to protect Vulnerable Slot B, then sit back and see what happens. If, or when, it fails, you replace it with a new technology and see whether that works any better. But Grossman said that he’s seen a shift in recent years away from that kind of process and toward a more empirical one.
“It’s metrics-driven. So, suppose you have a Web site that you just put up and it’s full of bugs and when they’re found, they’re fixed fast,” Grossman said. “That tells you that you probably have a QA problem. If you have another site that has just a few bugs but when you try to get them fixed it takes forever or it doesn’t happen at all. That could tell you that your developers need training. Maybe they don’t understand what cross-site scripting is, so they need some education on that. It’s about which one works for you in which scenario.”
The movement toward a more numbers-driven approach has helped organizations get a better handle on what’s working in their security programs, Grossman said, and gives them actual evidence to back up their assertions.
“How do things get to be best practices? Because some expert like me or someone else said so,” he said. “I absolutely think things are getting better. Overall, the Web is more secure, measurably more secure. But at the same time, the attackers are getting better and more organized. If you’re a target of opportunity, you just have to be better than average. But if you’re a target of choice, you better be really good at detection and incident response.”
The ESEA League, an online competitive gaming community, has decided to settle with the state of New Jersey after the acting attorney general there alleged that the gaming community operator infected users’ machines with malware designed to mine Bitcoins.
The league is owned and managed by E-Sports Entertainment, LLC, and is known for its strict anti-cheating policy, which is supported by an “industry-leading anti-cheat client” that users are required to download.
In a blog post linked to on ESEA’s homepage, the community’s cofounder, Eric Thunberg, makes clear that ESEA’s decision to settle is not a concession to – and in fact the company disagrees with – the New Jersey attorney general’s account of the Bitcoin incident.
Bitcoin mining is the process through which Bitcoin users generate “blocks” in order to keep track of and legitimize Bitcoin transactions on the digital crypto-currency’s public ledger, the blockchain. Each new block references the hash of the previous block, chaining together the entire record of Bitcoin transactions, and creating one is tantamount to solving a difficult math problem: miners must find a value that makes the block’s hash fall below a network-set target. Because the process is resource-intensive, the creators of new blocks are rewarded with new Bitcoins.
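That "difficult math problem" is a proof-of-work search, which a toy Python sketch can illustrate. The difficulty here is set artificially low so the search finishes quickly; real Bitcoin mining uses the same double-SHA-256 construction over a precisely defined block header, at vastly higher difficulty.

```python
import hashlib

def mine(previous_block_hash: str, transactions: str, difficulty_bits: int = 16):
    """Find a nonce so the double-SHA-256 of the toy block header starts
    with `difficulty_bits` zero bits -- a miniature proof of work."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        header = f"{previous_block_hash}{transactions}{nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()  # a valid "block" was found
        nonce += 1

nonce, block_hash = mine("00" * 32, "alice->bob:1BTC")
assert int(block_hash, 16) < 1 << (256 - 16)
```

Because each attempt is just a hash computation, mining throughput scales directly with raw compute, which is why pooling thousands of hijacked GPUs, as alleged here, is profitable.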
The attorney general alleged that an ESEA employee or employees infected thousands of personal computers with malware that enabled E-Sports to monitor what programs subscribers were running and illegally perform Bitcoin mining. ESEA allegedly bundled this malware along with its anti-cheating software package.
More specifically, New Jersey charged that E-Sports created and deployed malicious software that enabled the company to monitor the computers of their users, even when those users were not signed into the ESEA League. ESEA, the state further alleges, also created a botnet operating on the computational resources of its users. The purpose of this botnet was to pool computing power from the ESEA League’s user machines in order to mine Bitcoins.
Over a two-week period, the state estimated, E-Sports hijacked more than 14,000 computers and accrued some $3,500 mining Bitcoins.
“This is an important settlement for New Jersey consumers,” said acting Attorney General John J. Hoffman. “These defendants illegally hijacked thousands of people’s personal computers without their knowledge or consent, and in doing so gained the ability to monitor their activities, mine for virtual currency that had real dollar value, and otherwise invade and damage their computers.”
The settlement requires that E-Sports pay the state $325,000 of the suspended $1 million penalty. In addition, the company has agreed to refrain from deploying software code that downloads to consumers’ computers without their knowledge and authorization, commit itself to a 10-year compliance program, and create a dedicated page on its website explaining the specific data it collects, the manner in which it is collected, and how the information is used. If the company fails to adhere to any of this over the next decade, it will be forced to pay the remaining $675,000.
Also named by the state is E-Sports software engineer Sean Hunczak, whom the state claims worked with Thunberg to develop the Bitcoin mining malware that used subscribers’ graphics processing units to silently mine Bitcoins.
Thunberg is adamant that he and his company are guilty of nothing.
“The settlement that was signed makes explicitly clear that we do not agree, nor do we admit, to any of the State of New Jersey’s allegations,” Thunberg wrote on the ESEA website. “The press release issued by the Attorney General about our settlement represents a deep misunderstanding of the facts of the case, the nature of our business, and the technology in question.”
Thunberg goes on to write that the employee responsible for the Bitcoin incident “was terminated,” though it is not clear who that employee is, and Threatpost was unable to contact Thunberg for comment.
NEW YORK–A small group of influential security researchers and executives are putting together a grass-roots movement to encourage more research on the emerging breed of connected and potentially vulnerable devices such as pacemakers, insulin pumps and others and help educate users about the security and privacy issues they raise.
The effort is meant to help focus security researchers on the new problem set presented by the rise of the so-called Internet of Things, the emerging network of non-PC devices. These devices, including medical devices, appliances and cars, have largely gone unexamined by security researchers until very recently. Some researchers, such as Charlie Miller and Chris Valasek, have looked at the security issues with the on-board computers in some cars, and there has been some notable research on medical devices, as well. But compared to the volume of work that’s been done on desktop or mobile software, it’s minuscule.
Those in the security community are aware of the potential problems with these devices, of course, but the consumers who use them have little idea of the dangers that an exploitable security bug in something like a pacemaker or car’s computer could present. Josh Corman, director of security intelligence at Akamai, and Nick Percoco, director at KPMG, are trying to change that by imploring security researchers to work on this new set of challenges rather than hammering away at problems that are already well understood.
“We’re facing a different kind of ocean with apex predators. We’re becoming more and more entangled with insecure and indefensible technologies,” Corman said during a talk at the OWASP AppSec USA conference here. “Let’s do security that matters, not just our day jobs. The outside world is part of the solution set. This is security for the public good.”
In some cases, research into security problems with medical devices or cars or other such non-PC devices has been dismissed as stunt hacking because it doesn’t have the immediate effect of finding a bug on iOS or Google Chrome. And Corman and Percoco said they’re well aware that some in the security community will criticize their effort. But that’s beside the point, they said.
“This is about doing research on things that matter rather than on things that frankly don’t matter,” Percoco said. “Today everything is connected, everything is Internet-enabled and the importance of this stuff is growing. If someone with a pacemaker dies, is someone doing forensics on the pacemaker? How are we going to know as a society that these things have flaws?”
The new movement, which is being called We are the Cavalry, got its start at DEF CON this summer, and Corman said it already has attracted a diverse group of researchers, hackers, executives and others with an interest in moving the project forward.
One goal, he said, is to educate the general public about the serious security issues that are likely to arise as more and more devices come online with minimal, if any, security testing.
“The half lives of these things are twenty, thirty, forty years. Even if we just didn’t know better for the last industrial control system software that went out last year, there’s another one going out this year. The question becomes, can we make better risk decisions if we have more information?” Corman said. “Hacking is a new form of power and it’s available to anyone. It’s just too easy to exert your will on other people.”
The Mevade botnet made news when it was found to be using the Tor anonymity network to communicate with its command and control infrastructure. Running C&C on Tor, however, turned out to be a fatal mistake when Tor usage spiked, alerting administrators to the unusual activity.
A group of Russian criminals apparently were paying attention to what happened to Mevade and are using a different darknet called I2P, or Invisible Internet Project, as a communication protocol for new financial malware called i2Ninja.
Researchers at Trusteer monitoring a Russian malware forum spotted i2Ninja, which seems to be run-of-the-mill financial malware that includes HTTP injection capabilities, email, FTP and form grabbers. The twist on this one is that it uses I2P to send stolen credentials back to the attackers, and it promotes 24/7 support as a differentiator.
“It offers to whoever buys the malware, in the command and control itself, a direct line of sight with the authors and the support team,” said Etay Maor, fraud prevention solutions manager at Trusteer. “In the control panel, they offer 24/7 support implemented through the I2P protocol.”
Providing support through the command and control panel is new as well; generally support is arranged through an underground forum or support site.
“I2P is similar to Tor in that it’s a darknet, but it’s actually considered more secure by criminals,” Maor said. “This is the first time I’ve seen malware operating over I2P and the first time I’ve seen it offering 24/7 support from the C&C. This means they have a lot of confidence in the security of the protocol.”
It’s unknown whether the support is automated through C&C or whether there is a live person communicating with an attacker. Other malware such as Citadel and the Neosploit Exploit Pack have marketed support; Neosploit even offered tiered support.
“[Support] is super important. I remember when the Zeus source code was leaked and people started developing their own malware, the chats in the different underground forums was about who they were going to get support from,” Maor said. “The people who buy this malware may be looking to make money, but they may not be super technical like they used to be in the past. Six or seven years ago, the people who wrote the code were the people who were operating the malware. It’s not the case today. Today you have a buyer’s market and a seller’s market. When you buy a product today you expect support to come with it and if you have questions, you expect someone to talk to.”
As for the I2P darknet, much like Tor, it’s favored by individuals who prefer or require anonymity online. Individuals in oppressed regions, journalists, activists and even health care and legal professionals who require private, secure communications with clients use services such as Tor to get the job done. These networks, however, also attract criminals such as the Mevade botmasters and the Silk Road gang who also operated over Tor until the FBI took down the underground drug market in early October.
I2P, meanwhile, differs from Tor in that it is a peer-to-peer protocol; computers on the network communicate among themselves through a proxy client using encrypted messages.
“It’s not like [Tor] where you’re browsing the Internet safely; this is a true darknet, a network you cannot reach,” Maor said. “You cannot Google it. You cannot find it. It’s its own protocol laying on top of HTTP.”
The use of I2P also serves to keep the malware fairly safe from law enforcement and rival gangs, Maor said. Governments interested in surveillance have had only limited success in breaking Tor and watching users’ communication over that network; the NSA, for example, has tried through its FoxAcid program and Quantum servers.
“From what I gather from different forums I participate in, I2P is considered even more secure than Tor,” Maor said. “I2P is still considered a true darknet, something that’s not currently compromised, or no one knows if it’s been compromised. That’s a good enough reason for them to use this.”
NEW YORK–The movement in the security and privacy communities to push the Do Not Track standard as an answer to the problem of pervasive online tracking by ad companies and other entities has resulted in the major browser vendors including DNT as an option for users, giving them a method for telling advertisers and Web sites their preferences on tracking. But DNT may well have outlived its usefulness and needs to be replaced by something that’s more effective and efficient, security experts say.
DNT was conceived as a way for users to communicate their preferences on Web and ad tracking to the sites that they visit. The major browsers, including Internet Explorer, Firefox and Chrome, all have an option that allows users to enable DNT, which essentially sends an HTTP header to sites the users visit telling them whether the users consent to tracking. Advertisers and Web site owners rely on tracking to help them determine user preferences and behaviors and see where users are coming from and going to after leaving their sites. The Federal Trade Commission has pushed DNT as a privacy protecting technology and something that helps consumers defend against unwanted tracking of their online activities.
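On the wire, the DNT signal described above is nothing more than a single HTTP request header. A minimal sketch in Python (the URL is a placeholder) of what a browser effectively attaches to each request when the option is enabled:

```python
import urllib.request

# Build a request carrying the Do Not Track signal; the value "1" means the
# user does not consent to tracking. No network traffic is generated here --
# we only construct the request to inspect the header as it would be sent.
req = urllib.request.Request("https://example.com/", headers={"DNT": "1"})

# urllib normalizes header names to capitalized form ("Dnt").
print(req.get_header("Dnt"))  # prints "1"
```

Whether a site honors the header is entirely voluntary, which is precisely the weakness the critics quoted below are pointing at.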
However, some security experts have begun to question the efficacy of DNT and say that it may be giving users the false impression that they’re completely safe from tracking.
“We need something more substantial that actually works and doesn’t impinge on people’s privacy. This Do Not Track thing is kind of a hot mess,” said Robert Hansen, a senior product manager at WhiteHat Security, in a talk at the OWASP AppSec USA conference here Wednesday. “We believe in opting everyone into security instead of out of it.”
One issue with DNT is that the online ad groups do not support it, and it’s left up to each individual site owner to decide how to deal with the signal from users and whether to honor it. There also are ways around the DNT system, and advertisers and site owners can use other means to track users. Hansen said that users should have a better option for preventing tracking than a voluntary system that many sites and advertisers ignore.
“We’d like to see ‘can not track’ rather than Do Not Track,” he said.
Another problem is that the major browser vendors implement DNT in different ways and have no incentives to actually block the ads that contain the code that tracks users. Microsoft, Mozilla and Google all partner with advertisers, which generates large amounts of revenue for all of them. Google, for example, is expected to earn nearly $40 billion in online ad revenue in 2013.
WhiteHat has released its own browser, Aviator, which is based on Chromium and uses an extension called Disconnect that disables Web site tracking and enables private search. The extension breaks the connections to third parties, preventing them from getting any data from users’ browsers.
DNT at this point appears to be dead, Hansen said, and there is a need for something more effective and useful for consumers.
“All the players came out looking good, because they can say that they supported it,” he said. “I firmly believe it was just a head fake by the online ad industry to buy time.”
A complete bundle of the personal information hackers need to steal an identity is available on the underground for as little as $25.
The data, known as Fullz in underground parlance, includes name, address, phone number, date of birth, Social Security or EIN number, email address with password, and possibly bank account or payment card information with credentials. The information has slightly more value if the victim is from Europe, the United Kingdom, Canada, Australia or Asia, pushing the price up to around $40.
These facts and many more are among the findings of a report aptly titled “The Underground Hacking Economy is Alive and Well.” Published by Dell SecureWorks, the report, compiled by Joe Stewart, director of malware research for the company’s Counter Threat Unit (CTU), and independent researcher David Shear, investigates the online marketplace for stolen data, paying particular attention to what is being sold, and at what cost.
As if the $40 Fullz price tag isn’t deflating enough, the going rate for the username-password combination for a bank account holding between $70,000 and $150,000 is $300 or less, depending on the bank. For the most part, the report did not show a significant rise or fall in prices for stolen data. However, the cost of Fullz and online bank account credentials did drop slightly.
“In 2011, the CTU saw hackers selling US bank account credentials with balances of $7,000 for $300,” wrote Dell’s Elizabeth Clarke on SecureWorks’ website. “Now, we see accounts with balances ranging from $70,000 to $150,000 go for $300 and less, depending on the banking institution where the account is located. In 2011, we also saw hackers selling Fullz for anywhere from $40 to $60, depending on the victim’s country of residence. Fullz are now selling between $25 and only go up to $40, depending on the victim’s location.”
The report also examined the cost of other hacking services such as DDoS attacks, exploit kits, and bundles of malware-infected machines (bots). Hacking into a website, for example, would cost somewhere between $100 and $300, depending on the site and the reputation of the hacker-for-hire. The cost of doxing – gathering the information that constitutes Fullz – is between $25 and $100.
With bots, buying in bulk saves. A bundle of 1,000 zombie machines costs $20, while 5,000 costs $90, 10,000 costs $160, and 15,000 costs $250.
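The bulk discount above works out to a falling per-machine cost at most tiers. A quick check of the arithmetic, using the price tiers exactly as quoted from the report:

```python
# Price tiers for bundles of infected machines (bots), per the report:
# bundle size -> total price in US dollars.
tiers = {1_000: 20, 5_000: 90, 10_000: 160, 15_000: 250}

for count, price in tiers.items():
    # Per-bot cost falls from 2 cents at 1,000 machines to 1.6 cents at 10,000.
    print(f"{count:>6} bots: ${price:>3} total, ${price / count:.4f} per bot")
```

Interestingly, the 15,000-bot bundle works out slightly more expensive per machine (about 1.67 cents) than the 10,000-bot bundle (1.6 cents), so the discount is not strictly monotonic in the quoted figures.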
“Infected computers in Asia tend to sell for less,” Clarke wrote. “It is thought that infected computers in the U.S. are probably more valuable than those in Asia, because they have a faster and more reliable Internet connection.”
Exploit kits are expensive. Stewart and Shear discovered an array of remote access Trojans selling for anywhere between $50 and $250, mostly advertised as “fully undetectable” or – coincidentally – FUD, meaning the kit would not be detected by antivirus products. These products could also carry additional costs, depending on how much of the command and control server setup and configuration the buyer was willing to handle.
Buyers could reportedly rent the Sweet Orange Exploit Kit for $450 per week or $1,800 per month, which is more expensive than the Blackhole Kit, which went for $700 for three months, $1,000 for six months, and $1,500 per year.
There’s nothing like a little peer pressure to nudge someone toward doing the right thing.
That’s the philosophy behind the Electronic Frontier Foundation’s Encrypt the Web Report, which examines the encryption capabilities of 18 leading Internet companies, including large carriers, social networks, technology companies and Web-based service providers.
“We want to use this as a positive encouragement where if companies see other folks getting good reports, they may want to apply more crypto,” said Kurt Opsahl, a senior staff attorney with the EFF.
These same companies were also surveyed as part of the EFF’s Who Has Your Back report in May. That report evaluated the companies’ efforts around privacy, protection of user data and transparency with regard to government requests for user data.
For the Encrypt the Web report, each company was sent a survey, though not all replied; other sources were also considered, including the companies’ websites and news reports. The companies were asked whether they support HTTPS, HSTS, Forward Secrecy and STARTTLS, and whether they encrypt data center links.
The latter query takes on particular importance following the disclosure of the National Security Agency’s MUSCULAR program, which revealed that the spy agency was tapping unencrypted links between data centers in order to siphon data on users’ Internet activities.
“One of the reasons for doing this was to find out about that category,” Opsahl said, adding that the complexity of encrypting those data center links varies between organizations dependent on the size of their operation, how data is transferred and the number of data centers they support.
Of the 18, only Dropbox, Google, Sonic.net, SpiderOak, Twitter and Yahoo said they do encrypt links between data centers. Microsoft was the lone company to concede it did not, while the EFF was unable to determine either way for the remaining companies.
Dropbox, Google, Sonic.net and SpiderOak were the only companies to score a checkmark in all five categories.
“They understand their customers want privacy and security, and are willing to deploy additional measures to ensure crypto is in place against a wide variety of attack vectors,” Opsahl said. “This helps their customers feel more secure about their data.”
Most on the list support HTTPS, although Amazon and Tumblr do so in a limited fashion. Fewer than half support HSTS, and fewer still support STARTTLS, which the EFF says is especially important for email service providers. STARTTLS encrypts communication between email servers over SMTP; if both providers use the protocol, the message is encrypted, but if one does not, it is sent in clear text.
“We have asked for email service providers to implement STARTTLS for email transfer,” the EFF said in a blog post. “It’s critical to get as many email service providers as possible to implement the system.”
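A receiving mail server advertises STARTTLS support in its response to the SMTP EHLO greeting. The sketch below shows a simplified check of such a response; the sample text is made up, and the parsing is looser than what RFC 3207 and RFC 5321 actually require:

```python
def supports_starttls(ehlo_response: str) -> bool:
    """Return True if an SMTP EHLO response advertises the STARTTLS extension."""
    for line in ehlo_response.splitlines():
        # Each extension line looks like "250-KEYWORD" (or "250 KEYWORD" on
        # the final line); strip the 4-character reply-code prefix.
        keyword = line[4:].strip().upper()
        if keyword == "STARTTLS":
            return True
    return False

sample = "250-mail.example.com\n250-SIZE 35882577\n250-STARTTLS\n250 HELP"
print(supports_starttls(sample))  # prints True
```

In practice, a sending server that sees the keyword issues the STARTTLS command and upgrades the connection to TLS (Python’s `smtplib.SMTP.starttls()` does the same for clients); if the keyword is absent, the message falls back to clear text, which is exactly the gap the EFF is flagging.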
Perhaps more critical is the number of large service providers that did not score so well in the report. Amazon, Apple and Tumblr earned one checkmark among them (Apple’s iCloud for its support of HTTPS). Carriers AT&T, Comcast and Verizon earned zero checkmarks; AT&T and Verizon have a history of cooperation with the government on surveillance issues, Opsahl said. The EFF and AT&T were embroiled in a lawsuit over the carrier’s alleged cooperation with the NSA’s spying program that was eventually settled when Congress gave AT&T retroactive immunity.
“We’re still concerned by their cooperation,” Opsahl said.
Regardless of encryption deployments, sometimes companies, such as Lavabit, have not been able to overcome government surveillance. Lavabit is alleged to have been Edward Snowden’s secure email provider; rather than turn over its decryption keys to the government, Lavabit shut its doors and went out of business. Silent Circle soon thereafter shuttered its secure email service, Silent Mail, before it too would be compelled to turn over its keys to the government.
In the meantime, the EFF hopes the crypto scorecard will nudge more Internet companies toward deploying encryption across the board.
“For the ‘Who Has Your Back’ report, it has worked well with companies interested in getting a good report. We’ve been able to add stars to several companies over time,” Opsahl said. “The idea is to encourage companies to have a race for the top and be able to show customers they are dedicated to providing quality security.”