Encryption, once a tool used mainly by security professionals, activists and others with reason to suspect their communications may be at risk, has been moving ever deeper into the mainstream in recent months. Now, Microsoft is planning to roll out a new encrypted email service on its Office 365 site that will make sending and receiving secure email much simpler.
The new service, known as Office 365 Message Encryption, is designed to simplify the process of using encrypted email, something that hasn’t been as easy as most users would like. Setting up and using many secure email applications can be an arduous and confusing process, particularly for users who may not be familiar with security. Microsoft’s new service, which will be available in the first quarter of 2014, uses a system that’s somewhat similar to other secure email systems, wherein a user receives an email with an encrypted attachment and instructions for opening it.
“No matter what the destination (Outlook.com, Yahoo, Gmail, Exchange Server, Lotus Notes, GroupWise, Squirrel Mail, you name it), you can send sensitive business communications with an additional level of protection against unauthorized access. There are many business situations where this type of encryption is essential,” Microsoft’s Shobhit Sahay said in a blog post explaining the new service.
“When an external recipient receives an encrypted message from your company, they see an encrypted attachment and an instruction to view the encrypted message. You can open the attachment right from your inbox, and the attachment opens in a new browser window. To view the message, you just follow the simple instructions for authenticating via your Office 365 ID or Microsoft Account.”
Since the start of the summer, when the Edward Snowden NSA leaks began, encrypted communications have become a hot topic in the security and privacy communities, as well as in the wider user community. The secure email service reportedly used by Snowden, Lavabit, shut down in August, as did the Silent Mail system run by Silent Circle, both moves coming on the heels of government demands for Lavabit’s SSL keys.
Microsoft’s new service isn’t really the same kind of system as those, but it’s meant to help businesses secure their sensitive communications through a variety of encryption schemes. When the data is at rest in Microsoft’s data centers, it is protected by BitLocker. The connection between the client and the Office 365 servers is protected by SSL, and the messages themselves are encrypted and signed using S/MIME.
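As a rough illustration of why message-level protection matters on top of SSL: S/MIME both signs and encrypts the message body itself, so it stays protected end to end regardless of the transport. The stdlib-only Python sketch below is a stand-in, not Microsoft’s implementation; real S/MIME uses X.509 certificates and asymmetric keys, while this shows only the signing half, with HMAC.

```python
# Conceptual sketch: message-level protection travels with the message body
# itself, independent of the SSL/TLS tunnel. HMAC stands in here for S/MIME's
# certificate-based signatures.
import hashlib
import hmac

shared_key = b"demo-key-not-for-production"   # placeholder key for illustration
message = b"Quarterly figures attached - confidential."

# Sender attaches a signature computed over the message body.
signature = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# Recipient recomputes and compares; any tampering in transit breaks the match.
def verify(msg: bytes, sig: str) -> bool:
    expected = hmac.new(shared_key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

print(verify(message, signature))            # True
print(verify(b"tampered body", signature))   # False
```

Because the signature is bound to the body rather than the connection, it survives hops through intermediate mail servers that SSL alone cannot protect.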
The system will use a simple Web interface for administration, and enterprise administrators will be able to set up rules that determine which emails are encrypted.
“The Message Encryption interface, based on Outlook Web App, is modern and easy to navigate. You can easily find information and perform quick tasks such as reply, forward, insert, attach, and so on. As an added measure of protection, when the receiver replies to the sender of the encrypted message or forwards the message, those emails are also encrypted,” Sahay said.
Image from Flickr photos of FutUndBeidl.
Dennis Fisher and Mike Mimoso discuss the major security stories of the last two weeks, including the BGP route hijacking, why Do Not Track doesn’t work and the We Are the Cavalry movement.
http://threatpost.com/files/2013/11/digital_underground_135.mp3
Debian has released patches for a pair of security vulnerabilities in the free operating system, including a security bypass flaw in the Nginx Web server. The other vulnerability lies in a Perl module used in the OS.
The vulnerability in the HTTP::Body Perl module could allow an attacker to run arbitrary commands on a vulnerable Debian server.
“The HTTP body multipart parser creates temporary files which preserve the suffix of the uploaded file. An attacker able to upload files to a service that uses HTTP::Body::Multipart could potentially execute commands on the server if these temporary filenames are used in subsequent commands without further checks. This update restricts the possible suffixes used for the created temporary files,” the Debian advisory says.
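In spirit, the flaw and the fix look like the Python sketch below. The names, the fallback suffix and the whitelist regex are illustrative, not taken from the actual Perl module.

```python
# Sketch of the HTTP::Body-style flaw: a temporary file that preserves an
# attacker-controlled upload suffix becomes dangerous if its name is later
# interpolated into a shell command without further checks.
import re
import tempfile

def make_temp_for_upload(original_name: str) -> str:
    # Vulnerable pattern: suffix taken verbatim from the uploaded filename,
    # e.g. "report.txt; rm -rf ~" would yield a temp name containing shell
    # metacharacters.
    suffix = original_name[original_name.rfind("."):] if "." in original_name else ""

    # The fix Debian shipped, in spirit: restrict the suffix to a safe set
    # before using it in the created filename.
    if not re.fullmatch(r"\.[A-Za-z0-9]{1,8}", suffix):
        suffix = ".upload"

    f = tempfile.NamedTemporaryFile(suffix=suffix, delete=False)
    f.close()
    return f.name

print(make_temp_for_upload("photo.jpg"))        # temp path ending in .jpg
print(make_temp_for_upload("x.txt; rm -rf ~"))  # falls back to .upload
```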
The second vulnerability is a bug in the Nginx Web server that enables an attacker to bypass the security restrictions in Debian. Found by Ivan Fratric of the Google security team, the vulnerability is a serious one. It “might allow an attacker to bypass security restrictions by using a specially crafted request,” Debian said in its advisory.
Users running vulnerable versions of Debian are encouraged to upgrade as soon as possible.
Stuxnet, it turns out, was a two-headed beast: one version that could have laid waste to the Natanz nuclear facility it infected, and one that, by expert accounts, should have remained undetected if not for the noisier yet less complex second attack routine that is now familiar to the world.
Industrial control system and SCADA expert Ralph Langner wrote an article for Foreign Policy magazine and published a paper on his website this week that throw back the covers on an older, more complex and stealthier version of the malware, markedly different from the second attack routine that emerged in 2010.
“It turns out that it was far more dangerous than the cyberweapon that is now lodged in the public’s imagination,” Langner wrote. “Without that later and much simpler version, the original Stuxnet might still today sleep in the archives of antivirus researchers, unidentified as one of the most aggressive cyberweapons in history.”
Langner said the older, lesser-known Stuxnet—put in place in 2007—targeted the protection systems around the cascades of centrifuges used to enrich uranium at the plant. The attackers were keenly aware of weaknesses in plant design and process execution. They knew the Iranians were content to accept a percentage of faulty centrifuges because they had designed a protection system that let enrichment continue amid the breaking centrifuges, Langner said.
“The system might have kept Natanz’s centrifuges spinning, but it also opened them up to a cyberattack that is so far-out, it leads one to wonder whether its creators might have been on drugs,” Langner wrote.
Ingeniously, the malware had the capability of recording 21 seconds of activity from the protection system’s sensors, showing a healthy stream of activity. That 21 seconds was looped over and over on monitoring screens while the attack was executed. Engineers thought they were watching an enrichment process hum along as designed that instead was spinning out of control. The malware attacked industrial controllers built by Siemens, closing crucial valves causing pressure to go up, gases to collect, and centrifuges to figuratively blow up.
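The replay trick can be sketched in a few lines. This is illustrative only, not actual Stuxnet code; the sensor values are invented.

```python
# Illustrative sketch of the replay trick Langner describes: a recorded window
# of healthy sensor readings is looped on the operators' screens while the
# real process diverges.
from itertools import cycle, islice

healthy_window = [2.00, 2.01, 1.99, 2.00, 2.02]   # recorded "normal" pressures
real_process   = [2.00, 2.30, 2.80, 3.50, 4.40]   # what is actually happening

# Operators see the recorded window, replayed indefinitely:
displayed = list(islice(cycle(healthy_window), len(real_process)))

for shown, actual in zip(displayed, real_process):
    print(f"display: {shown:.2f} bar   actual: {actual:.2f} bar")
```

The monitoring screens stay flat and reassuring no matter how far the actual pressures climb, which is exactly why the engineers saw nothing wrong.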
The attackers, Langner said, could have let them literally blow up, causing catastrophic destruction. They didn’t, keeping their cover as a result, he said. “The implementation of the attack with its extremely close monitoring of pressures and centrifuge status suggests that the attackers instead took great care to avoid catastrophic damage,” Langner wrote.
Langner’s analysis of the attack called it over-engineered for the task and that any slip-up would have risked detection by the Iranians. Two years after the first Stuxnet was in place, in 2009, the second phase was introduced.
The variant attacked another process control system that controlled rotor speeds in the centrifuges and was a self-replicating worm that moved within the plant’s network and on portable USB drives; the older version, Langner said, was deliberately installed on plant computers, likely by an agent of the attackers.
“This suggests that the attackers had lost the capability to transport the malware to its destination by directly infecting the systems of authorized personnel, or that the centrifuge drive system was installed and configured by other parties to which direct access was not possible,” Langner wrote.
This version of Stuxnet has been well documented, from its use of a number of Windows zero-day exploits to its malware signed with stolen digital certificates. Langner said this version of Stuxnet was written by hackers skilled in writing malicious code, while the first attack was coded alongside experts adept in industrial control systems, not IT. Langner points a finger at the National Security Agency as the author of Stuxnet, calling it the only logical place for its development.
This version and approach to attacking the Iranians’ nuclear capabilities left fingerprints—strange behavior in the industrial processes that could, and would, be detected. And while the attackers could have caused catastrophic destruction at any time, Langner estimates they instead set the country’s nuclear program back by only two years.
“The attackers were in a position where they could have broken the victim’s neck, but they chose continuous periodical choking instead,” Langner wrote. “Stuxnet is a low-yield weapon with the overall intention of reducing the lifetime of Iran’s centrifuges and making the Iranians’ fancy control systems appear beyond their understanding.”
Langner also speculates that Stuxnet was not built to escape beyond Natanz’s walls, yet it did, likely through contractors who left the plant with Stuxnet-infected laptops and plugged them in at other industrial facilities where they were contracted. Stuxnet was designed to spread only on local networks or via portable drives, Langner said.
He also wrote that it was likely the attackers’ intention to allow Stuxnet to spread since the malware reports IP addresses and hostnames of infected systems to a command infrastructure. The attackers could monitor the movement of contractors, likely in the hopes of spotting other nuclear facilities in Iran operating under the radar, he wrote.
The danger, too, is that future weaponized attacks such as Stuxnet can follow this same path into a facility; as Langner put it, contractors are good at engineering tasks but lousy at cybersecurity, and could be unwitting pawns in deploying another such weapon at any time.
Langner estimates that 50 percent of the investment into Stuxnet was put into hiding the attack; future attacks may not require the same kind of investment, and therefore may not need the resources of a nation-state such as Stuxnet did, Langner wrote.
“And unlike the Stuxnet attackers, these adversaries are also much more likely to go after civilian critical infrastructure. Not only are these systems more accessible, but they’re standardized,” he wrote.
Ultimately, Langner said, Stuxnet may have served two purposes: 1) disrupt the Iranian nuclear program; and 2) allow the attacker to flex its cyberweaponry muscle.
“Operation Olympic Games started as an experiment with an unpredictable outcome. Along the road, one result became clear: Digital weapons work. And different from their analog counterparts, they don’t put military forces in harm’s way, they produce less collateral damage, they can be deployed stealthily, and they are dirt cheap,” Langner wrote. “The contents of this Pandora’s box have implications much beyond Iran; they have made analog warfare look low-tech, brutal, and so 20th century.”
Dennis Fisher talks with several members of the We Are the Cavalry project, including Josh Corman, Robert Hansen, Space Rogue and John Dickson, about the movement’s origins, its goals to promote research on topics such as medical device security and how to help change the perception of security research.
http://threatpost.com/files/2013/11/digital_underground_134.mp3
NEW YORK–The term “best practices” is high on the list of overused and nearly meaningless phrases that get thrown around in the security field. It forms the basis for regulations such as HIPAA and PCI DSS and yet if you asked a random sample of 10 security people what the phrase meant, you’d likely get 10 different answers. But what if there aren’t actually any best practices?
“I think there are no best practices, just things that work for you in the right scenario,” Jeremiah Grossman, CTO of WhiteHat Security, said in an interview at the OWASP AppSec USA conference here Thursday. “What’s important is trying to ascertain what those are.”
The process of discovering what works in security has traditionally been one of trial and error. Insert Shiny Defensive Technology A to protect Vulnerable Slot B, then sit back and see what happens. If, or when, it fails, you replace it with a new technology and see whether that works any better. But Grossman said that he’s seen a shift in recent years away from that kind of process and toward a more empirical one.
“It’s metrics-driven. So, suppose you have a Web site that you just put up and it’s full of bugs and when they’re found, they’re fixed fast,” Grossman said. “That tells you that you probably have a QA problem. If you have another site that has just a few bugs, but when you try to get them fixed it takes forever or it doesn’t happen at all, that could tell you that your developers need training. Maybe they don’t understand what cross-site scripting is, so they need some education on that. It’s about which one works for you in which scenario.”
The movement toward a more numbers-driven approach has helped organizations get a better handle on what’s working in their security programs, Grossman said, and gives them actual evidence to back up their assertions.
“How do things get to be best practices? Because some expert like me or someone else said so,” he said. “I absolutely think things are getting better. Overall, the Web is more secure, measurably more secure. But at the same time, the attackers are getting better and more organized. If you’re a target of opportunity, you just have to be better than average. But if you’re a target of choice, you better be really good at detection and incident response.”
The ESEA League, an online competitive gaming community, has decided to settle with the state of New Jersey after the acting attorney general there alleged that the gaming community operator infected user-machines with malware designed to mine Bitcoins.
The league is owned and managed by E-Sports Entertainment, LLC, and is known for its strict anti-cheating policy, which is supported by an “industry leading anti-cheat client” that users are required to download.
In a blog post linked to on ESEA’s homepage, the community’s cofounder, Eric Thunberg, makes clear that ESEA’s decision to settle is not a concession to the New Jersey attorney general’s account of the Bitcoin incident; in fact, the company disagrees with it.
Bitcoin mining is the process through which Bitcoin users generate “blocks” in order to keep track of and legitimize Bitcoin transactions on the digital crypto-currency’s public ledger, the blockchain. Generating a new block amounts to solving a computationally hard puzzle, and because each new block must reference the block before it, the chain preserves the record of every Bitcoin transaction ever made. Because the process is resource-intensive, the creators of new blocks are rewarded with new Bitcoins.
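The proof-of-work idea can be sketched with a toy miner. Real Bitcoin hashes a binary block header with double SHA-256 at a vastly higher difficulty; the function and field names here are illustrative.

```python
# Toy proof-of-work miner: find a nonce whose hash, together with the previous
# block's hash and the transactions, starts with enough zero hex digits.
import hashlib

def mine(prev_hash: str, transactions: str, difficulty: int = 4) -> tuple[int, str]:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(
            f"{prev_hash}{transactions}{nonce}".encode()
        ).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, block_hash = mine("00abc123", "alice->bob:1BTC")
print(nonce, block_hash)  # block_hash starts with "0000"
```

The work is finding the nonce; verifying it takes a single hash, which is why the resource cost falls entirely on the miner and why pooled computing power, like the botnet New Jersey alleged, is valuable.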
The attorney general alleged that an ESEA employee or employees infected thousands of personal computers with malware that enabled E-Sports to monitor what programs subscribers were running and illegally perform Bitcoin mining. ESEA allegedly bundled this malware along with its anti-cheating software package.
More specifically, New Jersey charged that E-Sports created and deployed malicious software that enabled the company to monitor the computers of its users, even when those users were not signed into the ESEA League. ESEA, the state further alleges, also created a botnet running on the computational resources of its users. The purpose of this botnet was to pool computing power from the ESEA League’s user machines in order to mine Bitcoins.
Over one two-week period, the state estimated, E-Sports hijacked more than 14,000 computers and accrued some $3,500 mining Bitcoins.
“This is an important settlement for New Jersey consumers,” said acting Attorney General John J. Hoffman. “These defendants illegally hijacked thousands of people’s personal computers without their knowledge or consent, and in doing so gained the ability to monitor their activities, mine for virtual currency that had real dollar value, and otherwise invade and damage their computers.”
The settlement requires that E-Sports pay the state $325,000 of the suspended $1 million penalty. In addition, the company has agreed to refrain from deploying software code that downloads to consumers’ computers without their knowledge and authorization, commit itself to a 10-year compliance program, and create a dedicated page on its website explaining the specific data it collects, the manner in which it is collected, and how the information is used. If the company fails to adhere to any of this over the next decade, it will be forced to pay the remaining $675,000.
Also named by the state is E-Sports software engineer Sean Hunczak, whom the state claims worked with Thunberg to develop the malware that used subscribers’ graphics processing units to silently mine Bitcoins.
Thunberg is adamant that he and his company are guilty of nothing.
“The settlement that was signed makes explicitly clear that we do not agree, nor do we admit, to any of the State of New Jersey’s allegations,” Thunberg wrote on the ESEA website. “The press release issued by the Attorney General about our settlement represents a deep misunderstanding of the facts of the case, the nature of our business, and the technology in question.”
Thunberg goes on to write that the employee responsible for the Bitcoin incident “was terminated,” though it is not clear who that employee is, and Threatpost was unable to contact Thunberg for comment.
NEW YORK–A small group of influential security researchers and executives are putting together a grass-roots movement to encourage more research on the emerging breed of connected and potentially vulnerable devices such as pacemakers, insulin pumps and others and help educate users about the security and privacy issues they raise.
The effort is meant to help focus security researchers on the new problem set presented by the rise of the so-called Internet of Things, the emerging network of non-PC devices. These devices, including medical devices, appliances and cars, have largely gone unexamined by security researchers until very recently. Some researchers, such as Charlie Miller and Chris Valasek, have looked at the security issues with the on-board computers in some cars, and there has been some notable research on medical devices, as well. But compared to the volume of work that’s been done on desktop or mobile software, it’s minuscule.
Those in the security community are aware of the potential problems with these devices, of course, but the consumers who use them have little idea of the dangers that an exploitable security bug in something like a pacemaker or car’s computer could present. Josh Corman, director of security intelligence at Akamai, and Nick Percoco, director at KPMG, are trying to change that by imploring security researchers to work on this new set of challenges rather than hammering away at problems that are already well understood.
“We’re facing a different kind of ocean with apex predators. We’re becoming more and more entangled with insecure and indefensible technologies,” Corman said during a talk at the OWASP AppSec USA conference here. “Let’s do security that matters, not just our day jobs. The outside world is part of the solution set. This is security for the public good.”
In some cases, research into security problems with medical devices or cars or other such non-PC devices has been dismissed as stunt hacking because it doesn’t have the immediate effect of finding a bug on iOS or Google Chrome. And Corman and Percoco said they’re well aware that some in the security community will criticize their effort. But that’s beside the point, they said.
“This is about doing research on things that matter rather than on things that frankly don’t matter,” Percoco said. “Today everything is connected, everything is Internet-enabled and the importance of this stuff is growing. If someone with a pacemaker dies, is someone doing forensics on the pacemaker? How are we going to know as a society that these things have flaws?”
The new movement, which is being called We are the Cavalry, got its start at DEF CON this summer, and Corman said it already has attracted a diverse group of researchers, hackers, executives and others with an interest in moving the project forward.
One goal, he said, is to educate the general public about the serious security issues that are likely to arise as more and more devices come online with minimal, if any, security testing.
“The half-lives of these things are twenty, thirty, forty years. Even if we just didn’t know better for the last industrial control system software that went out last year, there’s another one going out this year. The question becomes, can we make better risk decisions if we have more information?” Corman said. “Hacking is a new form of power and it’s available to anyone. It’s just too easy to exert your will on other people.”
The Mevade botnet made news when it was found to be using the Tor anonymity network to communicate with its command and control infrastructure. Running C&C on Tor, however, turned out to be a fatal mistake: Tor usage spiked, alerting administrators to the unusual activity.
A group of Russian criminals apparently were paying attention to what happened to Mevade and are using a different darknet called I2P, or Invisible Internet Project, as a communication protocol for new financial malware called i2Ninja.
Researchers at Trusteer monitoring a Russian malware forum spotted i2Ninja, which seems to be run-of-the-mill financial malware that includes HTTP injection capabilities and email, FTP and form grabbers. The twist is that it uses I2P to send stolen credentials back to the attackers, and it promotes 24/7 support as a differentiator.
“It offers to whoever buys the malware, in the command and control itself, a direct line of sight with the authors and the support team,” said Etay Maor, fraud prevention solutions manager at Trusteer. “In the control panel, they offer 24/7 support implemented through the I2P protocol.”
Providing support through the command and control panel is new as well; generally support is arranged through an underground forum or support site.
“I2P is similar to Tor in that it’s a darknet, but it’s actually considered more secure by criminals,” Maor said. “This is the first time I’ve seen malware operating over I2P and the first time I’ve seen it offering 24/7 support from the C&C. This means they have a lot of confidence in the security of the protocol.”
It’s unknown whether the support is automated through C&C or whether there is a live person communicating with an attacker. Other malware such as Citadel and the Neosploit Exploit Pack have marketed support; Neosploit even offered tiered support.
“[Support] is super important. I remember when the Zeus source code was leaked and people started developing their own malware, the chats in the different underground forums were about who they were going to get support from,” Maor said. “The people who buy this malware may be looking to make money, but they may not be super technical like they used to be in the past. Six or seven years ago, the people who wrote the code were the people who were operating the malware. It’s not the case today. Today you have a buyer’s market and a seller’s market. When you buy a product today you expect support to come with it and if you have questions, you expect someone to talk to.”
As for the I2P darknet, much like Tor, it’s favored by individuals who prefer or require anonymity online. Individuals in oppressed regions, journalists, activists and even health care and legal professionals who require private, secure communications with clients use services such as Tor to get the job done. These networks, however, also attract criminals such as the Mevade botmasters and the Silk Road gang who also operated over Tor until the FBI took down the underground drug market in early October.
I2P, meanwhile, operates unlike Tor in that it’s a peer-to-peer protocol: computers on the network communicate directly with one another through a local proxy client, using encrypted messages.
“It’s not like [Tor] where you’re browsing the Internet safely; this is a true darknet, a network you cannot reach,” Maor said. “You cannot Google it. You cannot find it. It’s its own protocol laying on top of HTTP.”
The use of I2P also serves to keep the malware fairly safe from law enforcement and rival gangs, Maor said. Governments interested in surveillance have had only limited success in breaking Tor and watching users’ communication over that network, for example through the NSA’s FoxAcid program and Quantum servers.
“From what I gather from different forums I participate in, I2P is considered even more secure than Tor,” Maor said. “I2P is still considered a true darknet, something that’s not currently compromised, or no one knows if it’s been compromised. That’s a good enough reason for them to use this.”
NEW YORK–The movement in the security and privacy communities to push the Do Not Track standard as an answer to the problem of pervasive online tracking by ad companies and other entities has resulted in the major browser vendors including DNT as an option for users, giving them a method for telling advertisers and Web sites their preferences on tracking. But DNT may well have outlived its usefulness and needs to be replaced by something that’s more effective and efficient, security experts say.
DNT was conceived as a way for users to communicate their preferences on Web and ad tracking to the sites that they visit. The major browsers, including Internet Explorer, Firefox and Chrome, all have an option that allows users to enable DNT, which essentially sends an HTTP header to sites the users visit telling them whether the users consent to tracking. Advertisers and Web site owners rely on tracking to help them determine user preferences and behaviors and see where users are coming from and going to after leaving their sites. The Federal Trade Commission has pushed DNT as a privacy protecting technology and something that helps consumers defend against unwanted tracking of their online activities.
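The signal itself could hardly be simpler. In this stdlib-only Python sketch, the client sets a single `DNT: 1` request header and a site voluntarily decides whether to honor it; the URL and the WSGI-style environ dict are illustrative.

```python
# The DNT signal is just one HTTP request header; nothing enforces it.
from urllib.request import Request

# Client side: attach the header to an outgoing request.
req = Request("https://example.com/", headers={"DNT": "1"})
print(req.get_header("Dnt"))  # "1" (urllib normalizes header names to "Dnt")

# Server side: in a WSGI app the same header arrives as HTTP_DNT, and honoring
# it is entirely up to the site - which is the weakness critics point to.
def wants_no_tracking(environ: dict) -> bool:
    return environ.get("HTTP_DNT") == "1"

print(wants_no_tracking({"HTTP_DNT": "1"}))  # True
print(wants_no_tracking({}))                 # False
```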
However, some security experts have begun to question the efficacy of DNT and say that it may be giving users the false impression that they’re completely safe from tracking.
“We need something more substantial that actually works and doesn’t impinge on people’s privacy. This Do Not Track thing is kind of a hot mess,” said Robert Hansen, a senior product manager at WhiteHat Security, in a talk at the OWASP AppSec USA conference here Wednesday. “We believe in opting everyone into security instead of out of it.”
One issue with DNT is that the online ad groups do not support it, and it’s left up to each individual site owner to decide how to deal with the signal from users and whether to honor it. There also are ways around the DNT system, and advertisers and site owners can use other means to track users. Hansen said that users should have a better option for preventing tracking than a voluntary system that many sites and advertisers ignore.
“We’d like to see ‘can not track’ rather than Do Not Track,” he said.
Another problem is that the major browser vendors implement DNT in different ways and have no incentives to actually block the ads that contain the code that tracks users. Microsoft, Mozilla and Google all partner with advertisers, which generates large amounts of revenue for all of them. Google, for example, is expected to earn nearly $40 billion in online ad revenue in 2013.
WhiteHat has released its own browser, Aviator, which is based on Chromium and uses an extension called Disconnect that disables Web site tracking and enables private search. The extension breaks the connections to third parties, preventing them from getting any data from users’ browsers.
DNT at this point appears to be dead, Hansen said, and there is a need for something more effective and useful for consumers.
“All the players came out looking good, because they can say that they supported it,” he said. “I firmly believe it was just a head fake by the online ad industry to buy time.”
A complete bundle of the personal information hackers need to steal an identity is available on the underground for as little as $25.
The data, known as Fullz in underground parlance, includes name, address, phone number, date of birth, Social Security or EIN number, email address with password, and possibly bank account or payment card information with credentials. The information has slightly more value if the victim is from Europe, the United Kingdom, Canada, Australia or Asia, pushing the price up to around $40.
These facts and many more are among the findings of a report aptly titled, “The Underground Hacking Economy is Alive and Well.” Published by Dell, the report, orchestrated by Joe Stewart, director of malware research for SecureWorks’ Counter Threat Unit (CTU), and independent researcher David Shear, investigates the online marketplace for stolen data, paying particular attention to what is being sold, and for what cost.
As if the $40 Fullz price tag isn’t deflating enough, the going rate for the username-password combination for a bank account holding between $70,000 and $150,000 is $300 or less, depending on the bank. For the most part, the report did not show a significant rise or fall in prices for stolen data. However, the cost of Fullz and online bank account credentials did drop slightly.
“In 2011, the CTU saw hackers selling US bank account credentials with balances of $7,000 for $300,” wrote Dell’s Elizabeth Clarke on SecureWorks’ website. “Now, we see accounts with balances ranging from $70,000 to $150,000 go for $300 and less, depending on the banking institution where the account is located. In 2011, we also saw hackers selling Fullz for anywhere from $40 to $60, depending on the victim’s country of residence. Fullz are now selling between $25 and only go up to $40, depending on the victim’s location.”
The report also examined the cost of other hacking services such as DDoS attacks, exploit kits, and bundles of malware-infected machines (bots). Hacking into a website, for example, would cost you somewhere between $100 and $300, depending on the site and the reputation of the hacker-for-hire. The cost of doxing – gathering the information that constitutes Fullz – is between $25 and $100.
With bots, buying in bulk saves. A bundle of 1,000 zombie machines costs $20, while 5,000 costs $90, 10,000 costs $160, and 15,000 costs $250.
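Worked out per machine, using the prices from the report, the bulk discount looks like this:

```python
# Per-bot price for each bundle size quoted in the SecureWorks report.
bundles = {1_000: 20, 5_000: 90, 10_000: 160, 15_000: 250}

for count, price in bundles.items():
    print(f"{count:>6} bots for ${price:>3}  ->  ${price / count:.4f} per bot")
```

At 1,000 machines a bot costs 2 cents; at 15,000 the unit price falls below 1.7 cents, a discount of roughly 17 percent for buying the largest bundle.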
“Infected computers in Asia tend to sell for less,” Clarke wrote. “It is thought that infected computers in the U.S. are probably more valuable than those in Asia, because they have a faster and more reliable Internet connection.”
Exploit kits and off-the-shelf malware are pricier. Stewart and Shear discovered an array of remote access Trojans selling for anywhere between $50 and $250, mostly advertised as “fully undetectable” or – coincidentally – FUD, meaning the kit would not be detected by antivirus products. These products could also carry additional costs, depending on how much command-and-control setup and configuration the buyer was willing to do.
Buyers could reportedly rent the Sweet Orange Exploit Kit for $450 per week or $1,800 per month, which is more expensive than the Blackhole Kit, which went for $700 per three months, $1,000 for six months, and $1,500 per year.
There’s nothing like a little peer pressure to nudge someone toward doing the right thing.
That’s the philosophy behind the Electronic Frontier Foundation’s Encrypt the Web Report, which examines the encryption capabilities of 18 leading Internet companies, including large carriers, social networks, technology companies and Web-based service providers.
“We want to use this as a positive encouragement where if companies see other folks getting good reports, they may want to apply more crypto,” said Kurt Opsahl, a senior staff attorney with the EFF.
These same companies were also surveyed as part of the EFF’s Who Has Your Back report in May. That report evaluated the companies’ efforts around privacy, protection of user data and transparency with regard to government requests for user data.
For the Encrypt the Web report, each company was sent a survey, though not all replied; other sources were also considered including the companies’ websites and news reports. The companies were asked whether they support HTTPS, HSTS, Forward Secrecy, STARTTLS, and whether they encrypt data center links.
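The HSTS criterion in particular is simple to check from the outside: a site qualifies if its HTTPS responses carry a Strict-Transport-Security header, which tells browsers to refuse plain-HTTP connections for the stated max-age. A minimal sketch (the live-check helper assumes network access; the domain passed in is whatever site you want to test):

```python
# Check the survey's HSTS criterion: does an HTTPS response carry a
# Strict-Transport-Security header?
from urllib.request import urlopen

def has_hsts(header_names) -> bool:
    # Header names are case-insensitive, so normalize before comparing.
    return any(name.lower() == "strict-transport-security"
               for name in header_names)

def check_site(domain: str) -> bool:
    # Live check against a real site (requires network access).
    with urlopen(f"https://{domain}/", timeout=10) as resp:
        return has_hsts(resp.headers.keys())
```

This only verifies that the header is present; the EFF's other criteria (forward secrecy, inter-data-center encryption) can't be observed this directly.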
The latter query takes on particular importance following the disclosure of the National Security Agency’s MUSCULAR program, which revealed that the spy agency was tapping unencrypted links between data centers in order to siphon data on users’ Internet activities.
“One of the reasons for doing this was to find out about that category,” Opsahl said, adding that the complexity of encrypting those data center links varies between organizations depending on the size of their operations, how data is transferred and the number of data centers they support.
Of the 18, only Dropbox, Google, Sonic.net, SpiderOak, Twitter and Yahoo said they do encrypt links between data centers. Microsoft was the lone company to concede it did not, while the EFF was unable to determine either way for the remaining companies.
Dropbox, Google, Sonic.net and SpiderOak were the only companies to score a checkmark in all five categories.
“They understand their customers want privacy and security, and are willing to deploy additional measures to ensure crypto is in place against a wide variety of attack vectors,” Opsahl said. “This helps their customers feel more secure about their data.”
Most on the list support HTTPS, although Amazon and Tumblr do so only in limited fashion. Fewer than half support HSTS, and fewer still support STARTTLS, which the EFF says is especially important for email service providers. STARTTLS encrypts communication between email servers over SMTP; if both providers use the protocol, the message is encrypted, but if either does not, it is sent in clear text.
“We have asked for email service providers to implement STARTTLS for email transfer,” the EFF said in a blogpost. “It’s critical to get as many email service providers as possible to implement the system.”
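The opportunistic nature of STARTTLS is visible in how a sending server negotiates it. A sketch using Python's standard smtplib (the hostname is a placeholder; note that if the receiving server does not advertise STARTTLS, this falls back to exactly the cleartext delivery the article describes):

```python
# Opportunistic STARTTLS for SMTP delivery: upgrade the session to TLS
# if the remote server advertises support, otherwise send in the clear.
import smtplib
import ssl

def deliver(host: str, sender: str, recipient: str, message: str) -> None:
    with smtplib.SMTP(host, 25, timeout=30) as smtp:
        smtp.ehlo()
        if smtp.has_extn("starttls"):
            # Upgrade the existing TCP connection to TLS in place.
            smtp.starttls(context=ssl.create_default_context())
            smtp.ehlo()  # capabilities must be re-queried after the upgrade
        # Without STARTTLS on both ends, this line sends cleartext SMTP.
        smtp.sendmail(sender, recipient, message)
```

The fallback is the weakness: a message is only protected in transit when every hop supports the protocol, which is why the EFF wants as many providers as possible on board.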
Perhaps more critical are the large service providers that did not score well in the report. Amazon, Apple and Tumblr earned one checkmark among them (Apple’s iCloud, for its support of HTTPS). Carriers AT&T, Comcast and Verizon earned zero checkmarks among them; AT&T and Verizon have a history of cooperation with the government on surveillance issues, Opsahl said. The EFF and AT&T were embroiled in a lawsuit over the carrier’s alleged cooperation with the NSA’s spying program that was eventually settled when Congress gave AT&T retroactive immunity.
“We’re still concerned by their cooperation,” Opsahl said.
Regardless of encryption deployments, some companies, such as Lavabit, have not been able to overcome government surveillance demands. Lavabit is alleged to have been Edward Snowden’s secure email provider; rather than turn over its decryption keys to the government, it shut its doors and went out of business. Silent Circle soon thereafter shuttered its secure email service, Silent Mail, before it too could be compelled to turn over its keys.
In the meantime, the EFF hopes the crypto scorecard will nudge more Internet companies toward deploying encryption across the board.
“For the ‘Who Has Your Back’ report, it has worked well with companies interested in getting a good report. We’ve been able to add stars to several companies over time,” Opsahl said. “The idea is to encourage companies to have a race for the top and be able to show customers they are dedicated to providing quality security.”
The code hosting site GitHub reset a number of users’ passwords and revoked a slew of user security authorizations this week following a wave of brute-force attacks.
According to a blog entry by GitHub’s Security Manager Shawn Davenport yesterday, the incident involved login attempts from almost 40,000 distinct IP addresses and was a slow, concerted effort to break into user accounts using multiple passwords.
It’s not known exactly how many accounts were compromised, but users with weak passwords, and in some cases even those with stronger passwords, had their passwords reset and all of their tokens, OAuth authorizations and SSH keys revoked. Affected users were sent an email yesterday requesting that they create a stronger password, examine their accounts for “suspicious activity” and set up two-factor authentication.
Companies such as Apple, Dropbox, Twitter and Evernote have all added two-factor authentication schemes wherein users enter a numerical code along with a username and password to their products over the past year or so to bolster security.
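The numeric code in these schemes is typically a time-based one-time password (TOTP, RFC 6238): server and client share a secret, and each independently derives the same short code from the current 30-second time window, so a stolen password alone is not enough. A minimal sketch using only the standard library:

```python
# Time-based one-time password (TOTP, RFC 6238) built on HOTP (RFC 4226).
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, digits: int = 6) -> str:
    # Each 30-second window maps to one counter value.
    counter = int((time.time() if timestamp is None else timestamp) // 30)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The RFC 6238 test vectors confirm the derivation: with the ASCII secret `12345678901234567890` and timestamp 59, the 8-digit code is 94287082. A real deployment would also allow a window of clock skew and rate-limit verification attempts.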
GitHub says it’s looking into the attack but in the meantime is working on instituting more aggressive rate-limiting measures to curb brute-force attacks going forward.
“In addition, you will no longer be able to login to GitHub.com with commonly-used weak passwords,” Davenport notes.
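The two defenses GitHub describes, per-source rate limiting and rejecting commonly used weak passwords, can be sketched roughly as below. The thresholds and the tiny blocklist are illustrative assumptions, not GitHub's actual values:

```python
# Sketch of login rate limiting plus a weak-password blocklist.
import time
from collections import defaultdict, deque

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}
WINDOW, MAX_ATTEMPTS = 60.0, 5  # at most 5 tries per IP per minute

_attempts = defaultdict(deque)  # ip -> timestamps of recent attempts

def allow_attempt(ip: str, now=None) -> bool:
    now = time.monotonic() if now is None else now
    recent = _attempts[ip]
    while recent and now - recent[0] > WINDOW:
        recent.popleft()  # drop attempts that fell out of the window
    if len(recent) >= MAX_ATTEMPTS:
        return False  # rate-limited: reject before checking credentials
    recent.append(now)
    return True

def acceptable_password(password: str) -> bool:
    return len(password) >= 8 and password.lower() not in COMMON_PASSWORDS
```

A distributed attack from 40,000 IP addresses, as in this incident, is precisely designed to stay under per-IP limits like this one, which is why GitHub also pushed password resets and two-factor authentication.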
Davenport also took the opportunity to remind GitHub users that the site runs a Security History page for each of its users that logs important events. Launched in October, the feature lets users see a list of active sessions with the ability to remotely revoke them.
Attackers are accessing routers running on the border gateway protocol (BGP) and injecting additional hops that redirect large blocks of Internet traffic to locations where it can be monitored and even manipulated before being sent to its intended destination.
Internet intelligence company Renesys has detected close to 1,500 IP address blocks that have been hijacked on more than 60 days this year, a disturbing trend that indicates attackers could finally have an increased interest in weaknesses inherent in core Internet infrastructure.
It is unknown how the attackers are accessing the affected routers, whether through physical access or because the routers are exposed to the Internet, but that is the easy part; the route injection itself is merely a few tweaks to the router’s configuration.
“It’s actually making a BGP-speaking router do exactly what it is intended to do. All you’re doing is changing the configuration on the router,” said Renesys CTO and cofounder Jim Cowie. “A normal border router would have normal configuration entries for all the networks you have access to—all your customers. This just adds extra lines to a configuration. They can announce these routes to my peers and let them know I can reach this even though it’s fiction. As long as you have access to a border router at an important service provider and you’ve chosen the right place to do this, there’s no software [malware] required.”
The hard part is knowing where to insert the route injection attack, Cowie said, adding that some of the victims Renesys has observed—and contacted—include financial services organizations, voice over IP providers, government agencies and other large enterprises. Attacks take place at the level of the BGP route where blocks of IP addresses, in some cases targeting specific organizations, are misdirected.
“On one hand, we’ve seen people hijacking blocks of addresses that belong to DSL pools, groups of customers not very specific somewhere in the country. And we’ve seen networks hijacked that belong to very specific organizations; they’re not a big pool of generic users, but somebody’s business,” Cowie said.
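One common hijack mechanic, offered here as an illustration rather than as the exact technique Renesys observed, exploits the fact that routers forward on the longest matching prefix: announcing a narrower block than the victim's legitimate prefix pulls that slice of traffic to the attacker. The prefixes below are RFC 5737 documentation addresses:

```python
# Longest-prefix matching: a bogus, more-specific announcement wins.
import ipaddress

routes = {
    ipaddress.ip_network("198.51.100.0/24"): "legitimate origin",
    ipaddress.ip_network("198.51.100.0/25"): "hijacker (more specific)",
}

def chosen_origin(address: str) -> str:
    ip = ipaddress.ip_address(address)
    # Pick the matching route with the longest prefix, as a router would.
    matches = [net for net in routes if ip in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]
```

Here traffic to the lower half of the /24 follows the hijacker's /25, while the rest still reaches the legitimate origin, which is part of why targeted hijacks of specific organizations can go unnoticed.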
Cowie said the attackers are using the routing system much in the same way a network engineer would.
“There is some sophistication in the choice of place where you inject these routes from,” Cowie said. “You want to be able to evade whatever filters people have in place to prevent the spread of bad routing. And you want to hijack a place that has influential status and is going to propagate [the routes] to the people whose traffic you want. Most of [the] sophistication in the attack is in the choice of the point where you actually do route injection.”
The attackers, meanwhile, can pull off this type of redirection and traffic inspection without adding much latency at either end of the web request. Also, unlike traditional man-in-the-middle attacks, where the attacker is within physical proximity of the victim, here the attacker could just as easily be halfway around the world. And should the traffic in question be unencrypted, plenty of sensitive business or personal data would be at risk.
“[The attacker is] getting one side of conversation only,” Cowie said. “If they were to hijack the addresses belonging to the webserver, you’re seeing users requests—all the pages they want. If they hijack the IP addresses belonging to the desktop, then they’re seeing all the content flowing back from webservers toward those desktops. Hopefully by this point everyone is using encryption.”
Renesys provided two examples of redirection attacks. The first took place every day in February, with a new set of victims in the U.S., South Korea, Germany, the Czech Republic, Lithuania, Libya and Iran redirected daily to an ISP in Belarus.
“We recorded a significant number of live traces to these hijacked networks while the attack was underway, showing traffic detouring to Belarus before continuing to its originally intended destination,” the company said on its blog. One trace, starting in Guadalajara, Mexico, and ending in Washington, D.C., included hops through London, Moscow and Minsk before the traffic was handed off to Belarus, all because of a false route injected at Level 3, the ISP formerly known as Global Crossing. The traffic was likely examined and then returned on a “clean path” to its destination, all of this happening in the blink of an eye.
In the second example, a provider in Iceland began announcing routes for 597 IP networks owned by a large U.S. VoIP provider; normally the Icelandic provider Opin Kerfi announces only three IP networks, Renesys said. The company monitored 17 events routing traffic through Iceland.
“We have active measurements that verify that during the period when BGP routes were hijacked in each case, traffic redirection was taking place through Belarusian and Icelandic routers. These facts are not in doubt; they are well-supported by the data,” the blog said. “What’s not known is the exact mechanism, motivation, or actors.”
Since this isn’t a vulnerability that can be patched, mitigations are limited to either cryptographically signing routes or following the best practice known as BCP 38, in which ISPs put filters in place to prevent spoofing and route injection, Cowie said. Both are expensive and may not be economically feasible for ISPs unless all are required to comply. And with cryptographic signing of routes in particular, if the trust is derived from a government or a single organization, that entity would have control over segments of Internet traffic, which could introduce another set of surveillance issues.
“The tempo [of route injection attacks] has picked up over the course of this year, so my guess is this is more common knowledge among groups who can do this,” Cowie said. “It’s hard to say whether it’s one group, or two groups, three groups. Maybe they know each other, we don’t know. It’s really pretty unknowable.”
Graphic courtesy of Renesys.
NEW YORK–If Bill Cheswick had his way, the future of computing and computer security would look a lot like the distant past, with trusted platforms, small programs, applications that can’t affect the operating system and resistance to user mistakes.
Cheswick, a former Bell Labs computer scientist and longtime speaker on security topics, echoed what many people in the security field have been saying for years now: The current way that we’re thinking about and deploying software and security isn’t working well enough and needs to be rethought. This is a familiar refrain for anyone who’s been paying attention to the direction of the security community of late, but Cheswick said that the solution to the current problem set doesn’t involve adding successively thicker layers of security onto existing platforms. Rather, he envisions a reboot of the computing ecosystem itself.
“I think we can build an affordable computing platform that can’t be compromised by user error not involving a screwdriver,” Cheswick said in a keynote talk at the OWASP AppSec USA conference here Wednesday. “You couldn’t compromise the apps, you couldn’t affect the OS, you couldn’t own the machine. It’s not about user education. It’s bad engineering to rely on grandma. There shouldn’t be anything she can do to affect the system.”
The ideal compute platform would include trusted hardware, trusted firmware, a sandbox and a trusted operating system, Cheswick said. The stack he described is not a novel concept. Older platforms, going back several decades, relied on this architecture, he said, and it’s been proven to be reliable and secure. The problem is that the current software and security ecosystems have evolved to a point where implementing something like that would be expensive, at least at the beginning. However, Cheswick believes that it would be worth the start-up costs and effort in order to spread the benefits to the widest possible user base.
Detecting intrusions and compromises of software and devices is the main goal of much of the security software in use today, but Cheswick maintains that model needs some tweaking.
“We’ve already lost once the evil software is on the machine,” he said.
Preventing attackers from getting their mitts on a target machine in the first place should be the goal, he said, and one that Cheswick believes can be achieved through the separation of the core components of the computing platform from the pieces the user needs to touch.
“I want a system where the OS can’t be changed or subverted regardless of the app that’s run or the user’s action. The apps can’t taint the OS or other apps,” he said. “Random Web software can run in a sandbox and it can have arbitrary amounts of evil and it won’t do any harm. And we need ubiquitous end-to-end crypto. I want my kernel to be cast in adamantium before it goes onto the machine. I don’t want it to change once it loads.”
Some of the features that Cheswick described have been implemented in various platforms over the years, most recently in Apple iOS, which will only run signed code and treats the device as a trusted platform. Whether that model becomes a dominant one in the years to come remains to be seen, but Cheswick said he thinks there’s a good chance it could happen.
“I think we can win. Correct software can be implemented if we’re very careful,” he said.
Hackers reportedly breached servers in January belonging to Cupid Media, a niche dating service with 30 million users, stealing more than 42 million unencrypted passwords and various other sensitive data.
Cupid Media operates a variety of niche dating sites based on ethnicity, religion, physical appearance, special interests, lifestyle and more.
Brian Krebs, who first obtained information about the attack earlier this month, suggests that the Australia-based online dating service may have failed to remove information belonging to users who had deleted their accounts. This, Krebs said, is likely how the site ended up exposing the information of more users than are currently registered there.
The Cupid Media compromise, which the company’s managing director, Andrew Bolton, confirmed to Krebs, demonstrates two troubling realities: users are still bad at creating passwords, and some companies are still failing to encrypt user data, passwords in particular.
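Protecting stored passwords in practice means salted, deliberately slow one-way hashing rather than keeping plaintext, so a database leak does not hand the attacker 42 million ready-to-use credentials. A minimal sketch using the standard library's scrypt; the cost parameters here are illustrative:

```python
# Salted, memory-hard password hashing with scrypt, plus verification.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    salt = os.urandom(16) if salt is None else salt  # unique salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

With per-user salts, identical passwords produce different digests, so an attacker cannot precompute a single lookup table for the whole dump, and the memory-hard cost function slows bulk cracking.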
According to the report, the hack exposed the names, email addresses and birthdays of both current and former users. The stolen information was found on a server that contained data from other recent breaches, including some of the 2.9 million customer records stolen from Adobe, also uncovered by Krebs.
Krebs examined the passwords used on the Cupid Media service, making lists of the top-ten numeric and non-numeric passwords. What he found was not promising:
Graphs via Krebs on Security
Attackers are exploiting a two-year-old vulnerability in JBoss Application Servers that enables a hacker to remotely get a shell on a vulnerable webserver. The number of infections has surged since exploit code called pwn.jsp was publicly disclosed Oct. 4.
Researchers at Imperva said that a number of government and education websites have been compromised, as indicated by data collected through the company’s honeypots. An attacker with remote shell access can inject code into a website run by the server or hunt and peck for files stored on the machine and extract them.
The vulnerability, in the HTTP Invoker service that provides RMI/HTTP access to Enterprise Java Beans, was discovered in 2011 and presented at a number of security events that year.
“The vulnerability allows an attacker to abuse the management interface of the JBoss AS in order to deploy additional functionality into the web server,” said Imperva’s Barry Shteiman. “Once the attackers deploy that additional functionality, they gain full control over the exploited JBoss infrastructure, and therefore the site powered by that application server.”
On Sept. 16, the National Vulnerability Database issued an advisory warning of a remote code execution bug affecting HP ProCurve Manager, network management software. The vulnerability was given the NVD’s highest criticality ranking of 10. Since then, other products running the affected JBoss Application Server have been identified, including some security software.
Within three weeks, an exploit was added to exploit-db that successfully gained shell against a product running JBoss 4.0.5.
“Immediately thereafter, we had witnessed a surge in JBoss hacking, which manifested in malicious traffic originating from the infected servers and observed in Imperva’s honeypot array,” Shteiman said.
According to Imperva’s analysis, the vulnerability lies in the Invoker service, which operates at the remote management level enabling applications to access the server. The Invoker improperly exposes the management interface, Shteiman said.
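An administrator can get a rough sense of whether an instance leaves the invoker interface reachable by probing the well-known JBoss invoker servlet paths without credentials; everything else in this sketch is an illustrative assumption, and a 200 response is a hint of exposure rather than proof of vulnerability:

```python
# Rough check for unauthenticated access to JBoss HTTP invoker servlets.
from urllib.error import HTTPError
from urllib.request import urlopen

INVOKER_PATHS = ("/invoker/JMXInvokerServlet", "/invoker/EJBInvokerServlet")

def exposed_invokers(base_url: str):
    found = []
    for path in INVOKER_PATHS:
        try:
            with urlopen(base_url.rstrip("/") + path, timeout=10) as resp:
                if resp.status == 200:  # answered without authentication
                    found.append(path)
        except (HTTPError, OSError):
            pass  # 401/403/404 or connection failure: not openly exposed
    return found
```

Restricting or removing the invoker servlets, and requiring authentication on the management interfaces, is the corresponding hardening step.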
Compounding the problem is that in addition to the pwn.jsp shell, Shteiman said there is another more sophisticated shell available to attackers.
“In these cases, the attackers had used the JspSpy web shell which includes a richer User Interface, enabling the attackers to easily browse through the infected files and databases, connect with a remote command and control server and other modern malware capabilities,” he said.
Imperva also said that the number of webservers running JBoss software has tripled since the initial vulnerability research was made public.
Developers behind the Angler Exploit Kit have apparently added a new exploit over the last week that leverages a known vulnerability in Microsoft’s Silverlight browser framework.
Silverlight, similar to Adobe Flash, is Microsoft’s plug-in for streaming media in browsers and is perhaps best known for being used in Netflix’s streaming video service.
British-based security researcher Chris Wakelin discovered the Silverlight exploit last week and posted about it on Twitter via his @EKWatcher handle. From there, an independent security researcher who goes by the name Kafeine picked it up, investigated Angler EK and described his findings on his blog Malware Don’t Need Coffee.
According to Kafeine, the exploit kit usually checks whether the system it lands on has Java or Flash, but it can now also check for Silverlight. If it can’t exploit Java or Flash, it delivers an exploit for a remote code execution vulnerability (CVE-2013-0074) targeting Silverlight 5. The vulnerability was patched in March, but users running Silverlight who haven’t yet applied the patch remain at risk and would be best served by updating their software.
Angler EK surfaced last month following the arrest in Russia of Paunch, the Blackhole Exploit Kit’s creator. According to Kafeine, the same team behind the more souped-up Cool Exploit Kit, which also had ties to Blackhole, helped develop Angler and is also behind the popular Reveton ransomware.
Netflix has 40 million subscribers worldwide who could potentially be vulnerable to the exploit, since the service principally uses Silverlight for streaming media. The company has been making strides to ditch Silverlight for HTML5 over the past few months, and while it introduced HTML5 support in Windows 8.1 and Internet Explorer 11 over the summer, the technology hasn’t been completely fleshed out on most browsers.