Dennis Fisher talks with Ryan Naraine about the news from the Virus Bulletin 2013 conference, whether the use of zero days is overrated and the collateral damage that can result from cyberwarfare attacks.

http://threatpost.com/files/2013/10/digital_underground_128.mp3
A security vulnerability in the web framework Django could make it easier for an attacker to steal a user’s session cookie and log in as that user even after the user has logged out.
The session invalidation vulnerability was discovered by G.S. McNamara, the same researcher who dug up a similar vulnerability in the Ruby on Rails web app framework in September.
Like Rails, Django lets developers decide where they want to store user session data. While not the default, one of the options is cookie-based storage, which, McNamara notes, stores all session data in the cookie itself and signs it.
“The default name for a session cookie is ‘sessionid’ regardless of whether the cookie stores only a session identifier or the complete session data hash,” McNamara explained in a post on his MaverickBlogging.com website Monday.
McNamara notes that, compared to Rails, it’s a little trickier to determine which session storage a site has implemented. But if a site uses cookie-based sessions, an attacker who can find, steal or intercept a user’s cookie can gain access to that user’s account, even after the user has logged out.
Django, an open source web application framework that helps users build web apps, runs on Python and was last updated just two weeks ago.
McNamara, who resides in D.C., alerted the Django developers to the vulnerability, but a fix does not appear to be on the horizon.
Instead the group in charge of the framework, the Django Software Foundation, is electing to warn users about the security implications associated with cookie-based sessions.
“Unlike other session backends which keep a server-side record of each session and invalidate it when a user logs out, cookie-based sessions are not invalidated when a user logs out. Thus if an attacker steals a user’s cookie, he or she can use that cookie to login as that user even if the user logs out,” reads a new note on how to use sessions on the Django site.
When reached Wednesday, Carl Meyer, a Django contributor and a member of its core team, acknowledged the group doesn’t plan to make any further changes to the way it handles cookie session storage, adding that “mitigation would require validating the session against server-stored information on every request,” and that at that point the user might as well just use a server-side session instead of a cookie-based session.
According to Django, it’s up to the developer to evaluate the additional risk of cookie session storage “and weigh the pros and cons for their application.”
McNamara still hopes to work with Django when it comes to enhancing the security of their web framework going forward. In an email to Threatpost on Wednesday, he asserted there are still lingering issues with respect to [Django’s] cookie-based session storage.
“I believe this is a risk that was written off without adequate documentation or warning,” McNamara said.
BERLIN–In the last few years, there have been a series of DDoS attacks and intrusions on government networks in South Korea that have resulted in the loss of untold amounts of data. The four attacks haven’t been linked together or attributed to the same attackers, but there are some similarities in the methods and results, a researcher said.
The attacks on South Korean government sites and banks date back to at least July 2009 and run up through an incident in June of this year. Not all of them were destructive, but some employed malware that wiped the master boot record of infected machines and rendered them unusable. Others were massive DDoS attacks directed against DNS servers or individual sites.
In one of the attacks, in March 2011, a malicious dropper was delivered to machines through a drive-by download. That dropper had a time bomb inside of it that instructed it to check the date and time and, at a predetermined hour, download and execute a piece of malware that would then overwrite the MBR of the infected machine. There were two different wiper malware samples involved in the attack, said Christy Chung of Fortinet, one for Windows machines and the other for Unix machines. In both cases, the MBR was wiped, rendering the machines unusable.
“The two wipers have similar behaviors,” Chung said during a talk at the Virus Bulletin 2013 conference here Thursday. “After the machine reboots, it shows that the operating system can’t be found because the MBR was overwritten.”
The attacks that occurred on June 25, 2013, used a different tactic, targeting two of the name servers used by some of the major South Korean government Web sites. In that case, the malware that infected the PCs used to attack the name servers had components that added registry keys and created services that enabled the malware to survive a reboot and remain on the system, Chung said. The two targeted DNS servers were hard-coded into the malware, and at a predetermined time the malware launched the DDoS attack against them. The effect was devastating.
“Many of the major Korean government sites were unavailable for some time,” Chung said.
Although there were some similarities in the malware used in the attacks, Chung said she’s not convinced that the same attackers were behind all of them.
“I don’t see that,” she said.
BERLIN–Penetration testing has come a long way in the last decade, evolving from a somewhat controversial practice to a de facto best practice in the enterprise market. That evolution hasn’t stopped by any means, and one of the things that experts say must be a part of any comprehensive test now is the use of live, custom malware.
Pen testers often use custom tools that they–or their companies–have built, but the use of live malware isn’t necessarily as common as it should be, said Gunter Ollmann, CTO of IOActive, in a talk at the Virus Bulletin 2013 conference here Thursday. The idea behind using freshly made malware is to better reproduce the effect of an actual attacker taking aim at the target network. Pen testing in many ways is about playing the role of a malicious actor, but the tests can be limited in scope, and therefore less effective, if tools such as live malware aren’t used, Ollmann said.
“Malware accounts for the vast majority of breaches. We need pen-testing methodologies to replicate the current attacker profiles,” he said. “We need to figure out which layers are actually detecting the malware. Did the malware compromise the host? Is it usable?”
Although there are millions upon millions of malware samples available in databases these days, Ollmann said they’re of limited use in a real-world penetration test. Creating new, unique malware and throwing that at a customer’s network is a much more effective and realistic way to test the network’s defenses.
“It’s not worth throwing yesterday’s malware at a target,” Ollmann said. “Off-the-shelf malware is trivial to detect.”
However, it’s not a matter of simply writing a little piece of malware and seeing whether you can slip it by the customer’s security systems. Ollmann said there are a number of important factors to consider in the process, including whether the target network employs proxies, how to handle command and control, and whether to create multiple versions of a given type of malware.
“I’d say you should create a tree of new malware. You should definitely pre-test it, but only against AV tools that you can prevent from uploading it to cloud services,” he said. “Create markers for each specific job, and choose your C&C carefully. Most enterprises employ proxies, so you likely need to make your malware proxy-aware to get through those defenses.”
In terms of methods for getting your newly created malware onto a target network, it’s no surprise that tried-and-true methods such as social engineering and spear phishing are still the most effective. Ollmann said email is among the more effective ways to get malware into a target network.
“Email with a URL to a download usually works,” he said. “One thing I’ve found to be very successful is going through a company’s recruiting site and then when they request a resume or CV, I send it with the malware attached and voila.”
In between pleas to end the government shutdown that has upwards of 70 percent of the intelligence community furloughed until further notice, NSA director Gen. Keith Alexander and Director of National Intelligence James Clapper spent a significant amount of time before the Senate Judiciary Committee on Wednesday defending the NSA’s surveillance activities and denouncing recent reports that the agency is building dossiers on Americans based on their social networking activities.
Alexander also confirmed that the NSA implemented a pilot program three years ago to collect cell phone call location data, but stressed the program has since been abandoned.
“This may be something that may be a future requirement for the country, but it is not right now, because when we identify a number, we get that to the FBI and they can get probable cause to get location data, that they need,” Alexander said. “And that’s the reason that we stopped in 2011.”
Alexander prefaced his remarks with a prepared statement, stressing that the data was never used: “In 2010 and 2011 N.S.A. received samples in order to test the ability of its systems to handle the data format, but that data was not used for any other purposes and was never available for intelligence analysis purposes.”
The subject of location data was raised last week by Sen. Ron Wyden, D-Oregon, during a Senate Intelligence Committee hearing, yet Alexander did not offer a straight answer, intimating instead that this was classified information.
Alexander sternly denied a New York Times report from the weekend that the NSA is creating graphical connections between individuals based on their social media activities on networks such as Facebook. The report was based on additional documents provided to the media by former NSA contractor Edward Snowden. The Times said a January 2011 memo from the NSA gave the agency the authority to conduct “large-scale graph analysis on very large sets of communications metadata without having to check foreignness” of the data, which included email addresses and phone call metadata.
“Those reports are inaccurate and wrong,” Alexander said. “What they have taken is the fact that we do take (social) data to enrich it. What’s not in front of those statements is the word ‘foreign.’ Information to understand what the foreign nexus is of the problem set we’re looking at.
“They’re flat out wrong saying we are creating dossiers on Americans,” Alexander added, further saying that under Executive Order 12333, the NSA is able to chain together phone and email records to figure out social network activity abroad, powers signed off on by the Secretary of Defense and Attorney General. Alexander also said that if the intelligence gathered during the course of an investigation pointed toward an American, that data would be turned over to the FBI which would pursue the lead after obtaining a court order.
In the meantime, Clapper, a 50-year veteran of the intelligence community, decried the government shutdown.
“On top of the Sequestration cuts, the shutdown seriously damages our ability to protect the safety and security of this nation and its citizens,” Clapper said. “This is not just a beltway issue, but this affects our global capability to support the military, diplomats and policy makers.”
Clapper added that American intelligence agents currently on furlough would be attractive targets for recruitment by adversaries, calling it a “dreamland.”
“From my standpoint, the damage will be worse as the shutdown drags on,” Clapper said.
The FBI has taken down the infamous Silk Road underground drug market, arresting Ross William Ulbricht in San Francisco yesterday and charging him not only with the distribution of illegal drugs including heroin and LSD, but also with a number of computer hacking crimes.
Ulbricht, who was known as Dread Pirate Roberts, boasted in a Forbes interview in August that he’d never be caught, but that quickly changed mid-afternoon yesterday when the FBI arrested Ulbricht inside the San Francisco Public Library. Federal prosecutors in New York filed narcotics trafficking, hacking and money laundering charges against Ulbricht, alleging that since January 2011 he ran an online platform where numerous dealers could peddle drugs, in addition to malware such as password stealers, keyloggers and remote access tools, the federal filing said.
The Silk Road website has been seized by the FBI, along with millions of dollars in Bitcoins, which were the only currency accepted on the site.
Operationally, Silk Road was accessible only through the Tor network, FBI special agent Christopher Tarbell wrote in the complaint filed against Ulbricht. The anonymity provided by the network kept transactions relatively secure; Silk Road had generated $1.2 billion in sales, the court papers said.
Resembling familiar online marketplaces, Silk Road not only offered customers tens of thousands of listings for controlled substances, but advertised hundreds of computer hacking services. Numerous listings offered services for hacking into social media accounts or ATMs, along with spam and phishing lists.
“One listing was for a ‘HUGE Blackmarket Contact List,’ described as a list of ‘connects’ for ‘services’ such as ‘Anonymous Bank Accounts,’ ‘Counterfet Bills (CAD/GBP/EUR/USD),’ ‘Firearms + Ammunition,’ ‘Stolen Info (CC, Paypal),’ and ‘Hitmen (10+ countries),’” Tarbell wrote.
Another 800 listings offered hacked Amazon and Netflix accounts, hacking tools, and packaged tool kits complete with keyloggers, RATs, banking Trojans and other malware, Tarbell wrote.
The Silk Road site also advertised the availability of forged driver’s licenses, passports, Social Security cards, utility bills, credit card statements, car insurance records and other documentation that would enable identity theft. The site also hosted a wiki and community forum where buyers and sellers could communicate, as well as guidance for conducting transactions on the site and avoiding law enforcement, Tarbell’s filing said.
“In a section of the forum labeled ‘Security – Tor, Bitcoin, cryptography, anonymity, security, etc.,’ there are numerous postings by users offering advice to other users on how they should configure their computers so as to avoid leaving any trace on their systems of their activity on Silk Road,” Tarbell wrote.
Undercover agents, Tarbell wrote, made more than 100 transactions on the site, buying drugs, hacking services and more, from vendors in 10 different countries including the U.S.
The court document also said the FBI located a number of servers hosting Silk Road operations, including one in an unnamed foreign country hosting the Silk Road site. Tarbell wrote that the FBI requested an image of that server on July 23 and determined that, as of that date, there were more than 950,000 registered user accounts on the server and more than 1.2 million communications sent between Silk Road users on the platform’s private messaging system. Tarbell added that between February 2011 and July 23 there were 1.2 million transactions completed on the site, involving almost 147,000 unique buyer accounts and 3,877 unique vendor accounts, generating roughly $1.2 billion (9.5 million Bitcoins).
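A quick back-of-the-envelope check shows the average exchange rate implied by the complaint’s totals. This is only a rough figure, since Bitcoin’s price varied widely between 2011 and 2013:

```python
# Cross-check of the figures in the complaint: revenue in dollars divided
# by revenue in Bitcoin gives the implied average exchange rate.
revenue_usd = 1.2e9   # roughly $1.2 billion in sales, per the complaint
revenue_btc = 9.5e6   # roughly 9.5 million Bitcoins, per the complaint

implied_rate = revenue_usd / revenue_btc
print(f"Implied average exchange rate: ${implied_rate:.2f} per Bitcoin")  # $126.32
```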
The court document also provides information on Ulbricht’s activity on the Silk Road platform, how he resolved contentious issues with users, threats from competitors, how site administrators were managed and compensated, and details of an alleged murder for hire.
Reuters, meanwhile, reported that arrests were made earlier this year in South Carolina related to Silk Road; Eric Daniel Hughes, a Silk Road customer operating under the pseudonym Casey Jones, was charged with drug possession. The DEA also seized the Bitcoins he used to allegedly purchase drugs on the site.
BERLIN–Just whispering the words “vulnerability disclosure” within earshot of a security researcher or a vendor security response team member can put you in fear for your life these days. The debate is so old and worn out that there is virtually nothing new left to say or chew on at this point. However, the question of when to disclose that a given vulnerability is being exploited in the wild is an entirely different one.
Regardless of which sect or splinter cell you belong to in the disclosure debate, for most people it all comes down to finding the most effective way to get a fix published and in the hands of users as quickly as possible. That could mean coordinated disclosure with the vendor or full disclosure on a public mailing list or something in between. But the lines get a little blurry when the discussion veers into the appropriate moment to tell the public that a given vulnerability is being actively exploited. It may seem obvious that users should be told as soon as possible, giving them the best chance at defending themselves or their networks. But there are many other factors in play, mainly the fact that alerting users also will wake up the attacker community.
That’s no small consideration, especially when it concerns a vulnerability in a widely deployed application such as Internet Explorer, Adobe Flash or Java. Researchers from Microsoft and Lancope looked at public exploitation notifications in a handful of major cases from the last few years and found that, as with many things in life, timing is everything.
“Exploitation disclosure is a good thing at any time, but the question is when and can it cause problems?” said Tom Cross of Lancope, who, along with Holly Stewart of Microsoft, gave a talk on the topic at the Virus Bulletin 2013 conference here Wednesday.
One of the cases the pair examined was the Windows Help and Support Center CVE-2010-1885 vulnerability. That bug was disclosed publicly on the Full Disclosure mailing list in June 2010 and the original disclosure included a proof-of-concept exploit. Not long afterward, the exploit was integrated into some attack toolkits and attacks against the vulnerability spiked. In other cases, researchers have gone through the coordinated disclosure process, working with vendors to get a fix ready before announcing the bug, and once the announcement is made, exploitation attempts will immediately increase as attackers pull apart the patch to find the bug behind it.
Not unlike the dreaded disclosure debate, the decision on when to notify users of exploitation attempts depends upon a number of factors. If a vulnerability is particularly severe and there are ongoing, widespread attacks against it, the vendor may well choose to notify users even if there’s no patch available. On the other hand, if the attacks are targeted and relatively spotty and the vendor has no workaround ready, it may decide to hold off on notification.
“If there’s nothing you can tell the users to do, there’s not a lot of point in disclosing the exploits,” he said. “It depends on the level of exploitation, the geographic distribution, is a patch available, when will it be if it’s not. If the answer is to tell people not to use a piece of software that’s necessary to do business, the reality is that’s not going to happen.”
It’s also true that the decision is not always solely in the hands of the vendor or even the researcher who discovered the vulnerability. In some cases, a third party security company may notice exploit attempts against a previously unknown vulnerability and take the step of notifying customers.
“There is no one answer,” Cross said.
On Oct. 9, 2003, Microsoft announced its new security patching process that would end up being a catalyst for significant change in the information security community. The program was announced with a press release that promised:
- “Improved patch management processes, policies and technologies to help customers stay up to date and secure.”
- “Global education programs to provide better guidance and tools for securing systems.”
Within the press release, chief executive officer Steve Ballmer said: “Our goal is simple: Get our customers secure and keep them secure. Our commitment is to protect our customers from the growing wave of criminal attacks.”
Those of us working in the security industry or with corporate information security responsibility saw this as a direct response to the famous Trustworthy Computing memo penned by Bill Gates in January 2002. The signs were clear. Microsoft was faced with a serious dilemma. Its software was riddled with security holes that were having a direct negative effect on its customers’ security, availability and privacy. In corporate IT, Microsoft had quickly earned the nickname of “necessary evil.” IT managers were forced to use Microsoft software for its business features, but it came at the cost of serious security risks.
Whether you like or disdain Microsoft, the security initiatives it started 10 years ago created a great wave of change in the information security industry.
For starters, Microsoft proved to the security community that communication is a key cornerstone to vendor relationships. No one likes to admit they have security problems. Microsoft took the leap of not only admitting it had a problem, but also committed to delivering ongoing communications to its customers and to all computing users. Microsoft started blogging about security issues and also embarked on serious outbound communication campaigns to educate users.
Microsoft showed that communication and relationships are a two-way street. The powerhouse eventually matured to the point where it embraced the same community of people who were responsible for finding and publicly releasing security holes in its software. Today, public disclosure of serious Microsoft security holes is the exception.
Also, resource planning is table stakes in the enterprise IT world. Being a cost center doesn’t help much, but IT has traditionally been underfunded and underappreciated. What is an enterprise IT or security manager supposed to do when their primary software vendor springs on them a critical security patch with do-or-die consequences? Historically, and still the case today, a lot of ongoing projects get dropped to quickly reallocate resources to the moment’s critical security patch. Living in a world of constant interruption is detrimental to morale and to the completion of any planned projects.
With Microsoft’s new consistent patch release timing, enterprise IT could depend on a schedule and allocate resources accordingly. The monthly patching cycle soon became better known as Patch Tuesday. Later in its maturity model, Microsoft would introduce the Advance Notification Service, which we know today as the Thursday before Patch Tuesday, when we receive a high-level snippet of what to expect the following week.
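Because Patch Tuesday always falls on the second Tuesday of the month, the release date is trivial to compute, which is precisely what makes it plannable for IT teams. A minimal sketch using Python’s standard calendar module:

```python
import calendar
from datetime import date

def patch_tuesday(year: int, month: int) -> date:
    """Return the second Tuesday of the given month."""
    cal = calendar.Calendar()
    # itermonthdates pads with days from adjacent months, so filter on month.
    tuesdays = [d for d in cal.itermonthdates(year, month)
                if d.weekday() == calendar.TUESDAY and d.month == month]
    return tuesdays[1]

print(patch_tuesday(2013, 10))  # prints 2013-10-08
```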
Microsoft also proved value with consistency in other ways. For example, Microsoft took the early bold step of defining its security criticality ratings and made the definitions public. Even Microsoft’s security bulletin text format and sections were delivered in a consistent format that security professionals have come to rely upon. Security people like repeatable and dependable systems. Microsoft delivered just that.
Three cheers to Patch Tuesday. It’s the second Tuesday of each month that we both love and hate. Ten years ago, the Patch Tuesday initiatives created profound benefits for all Microsoft customers by making it easier to keep systems patched and more secure. At the time, the idea seemed foreign, but it has since gained so much of a following that other vendors such as Cisco, Adobe and Oracle have followed suit. Spend just five minutes and consider where you’d be today without Microsoft taking the leap 10 years ago.
Andrew Storms is the Director of DevOps for CloudPassage.
BERLIN–In this city, one of the great world capitals, history is never far away. It permeates every aspect of daily life, and the German people are quite proud of much of that history. But there were dark days here too, and not so long ago, when the Stasi, the East German secret police, operated a pervasive surveillance apparatus that kept tabs on millions of Germans as a matter of course. Phone calls, daily movements and business dealings were monitored, ostensibly for the security of the nation. The environment then was quite different, obviously, from the atmosphere in the United States and other democracies today, but the effects of large-scale surveillance, security researcher Andrew Lee argues, remain psychologically devastating and debilitating for most people.
During the height of the Cold War, the surveillance apparatuses in East Germany and other countries were extensive and pervasive, but many people were aware that they were being watched on some level. What they didn’t know was how all-encompassing the data-gathering was and how the information was being used. When those details eventually were revealed, it had a profound effect.
“Everybody knew that they were being watched, but they didn’t know the extent,” said Lee, a researcher at ESET, in a talk at the Virus Bulletin 2013 conference here Wednesday. “When they found out, the psychological effect was devastating.”
Drawing a comparison with what’s happening in the U.S., UK and elsewhere in the wake of the leaks regarding NSA surveillance methods and capabilities in recent months, Lee said that governments now have become major adversaries for many organizations and even some individual users.
“Not only are the governments making laws, they’re asking for things like weakening crypto systems and backdoors,” he said. “Why even ask for access to a system when the state of endpoint security in our world today is so woefully inadequate? Why not just break the endpoint? And that’s what’s happened. The government is getting into the malware business. The next big thing will be malware on mobile devices.”
Beyond the effect the NSA leaks have had on the way that the general public perceives the government, there also has been a shift in the security community regarding the way that members share information and interact with one another. The levels of trust among some researchers and companies, built up over the years, have been reduced in some cases, Lee said, because researchers aren’t sure who they can trust now and who might be disclosing information to intelligence or law enforcement agencies.
“There has been a chilling of our democracy and created a distrust of companies,” Lee said. “We were having good conversations [in the community] before these leaks happened. Now we’re not talking about this anymore. We’re missing the point.”
One of the issues that’s come up often in discussions in recent months is whether governments are somehow forcing security and antimalware companies not to detect the custom malware and attack tools they’re using in their operations. Lee said he’s never been asked not to detect a government Trojan, and considers that approach a useless one.
“If you want to talk about coerced detection, that’s a really dumb way to do it. It’s not practical,” Lee said. “You do what everyone else does: You write some code and submit it to Virus Total and see who detects it.”
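The approach Lee describes can be sketched in a few lines. The following is a hypothetical illustration, not tooling from the talk: it hashes a sample locally and looks up any existing detections through VirusTotal’s public v3 API by hash, rather than uploading the file (the `VT_API_KEY` placeholder is an assumption; a real key is required to make the request):

```python
import hashlib
import json
import urllib.request

VT_API_KEY = "YOUR_API_KEY"  # hypothetical placeholder; requires a VirusTotal account

def sha256_of(path: str) -> str:
    """Hash a sample so it can be looked up without re-uploading it."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def detection_stats(file_hash: str) -> dict:
    """Query VirusTotal's v3 API for existing scan results by hash."""
    req = urllib.request.Request(
        f"https://www.virustotal.com/api/v3/files/{file_hash}",
        headers={"x-apikey": VT_API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        report = json.load(resp)
    # e.g. {"malicious": 41, "undetected": 12, ...} for a known sample
    return report["data"]["attributes"]["last_analysis_stats"]
```

Note the irony Lee is pointing at: submitting a sample to VirusTotal also distributes it to the vendors, which is why hash-only lookups are the less noisy first step.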
The revelations of the last few months have sparked endless discussions and a lot of vitriol, but Lee, for one, questions how useful all of that has been.
“I question the proportionality of our response to all of this,” he said. “What’s the return on our spend? We spend very little, if anything, educating the public on this.”
BERLIN–The technology industry often is used by politicians, executives and others as an example of how to adapt quickly and shift gears in the face of disruptive changes. But the security community has been doing defense in basically the same way for several decades now, despite the fact that the threat landscape has changed dramatically, as have customer needs. This situation is untenable and must change in order for effective defenses against zero-day vulnerabilities to emerge, experts say.
The use of exploits against zero days, or unpatched vulnerabilities, is nothing new. Attackers have been looking for and using new bugs for as long as there has been software to exploit. What’s changed in recent years is the scale of zero day exploit use and the kind of attackers using them. It used to be mainly individual attackers and some high-end cybercrime groups. But now, zero days are being used by governments, intelligence agencies and state-sponsored attack teams. In the hands of these groups, zero days represent a major threat to the targeted organizations, most of whom can’t keep pace with the patches coming out for known bugs, let alone defend against attacks on zero days.
“There’s no red button you can push to make this go away. This is going to go on and on and on,” Andreas Lindh of I Secure in Sweden said in a talk at Virus Bulletin 2013 here Wednesday. “We need to get our priorities straight. What I’m suggesting is that we get back to basics rather than buying more tools. The tools we have work pretty well when you use them correctly. We actually have really good tools. We need to start focusing on what matters, what really matters.”
Lindh said that the old concept of defense in depth, which has been ridiculed in some corners in recent years, still holds up in most cases if organizations implement their technology correctly and don’t sit back and expect miracles. One key to succeeding more often than not against high-level attackers, he said, is to harden the software we all depend on through the use of technologies such as ASLR and DEP, which prevent many common memory corruption attacks. The number of ways that attackers can get into systems has decreased in the last few years, Lindh said.
“There’s been a reduction in attack vectors they can use,” he said. “There’s not as much room for attacks anymore.”
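The value of mitigations like ASLR can be illustrated with a toy model. The entropy figure below is illustrative only and does not reflect any real operating system’s implementation; the point is that an exploit relying on a hard-coded address goes from always working to almost never working:

```python
import random

PAGE = 0x1000
SLOTS = 2 ** 8  # pretend only 8 bits of load-address entropy (illustrative)

def load_address(rng: random.Random, aslr: bool) -> int:
    """Where the vulnerable module loads: fixed, or randomly slid by ASLR."""
    base = 0x7F0000000000
    slide = rng.randrange(SLOTS) * PAGE if aslr else 0
    return base + slide

def attack_success_rate(aslr: bool, trials: int = 10_000) -> float:
    """Fraction of attempts where a hard-coded address guess lands."""
    rng = random.Random(1337)          # seeded for reproducibility
    guess = 0x7F0000000000             # attacker hard-codes the unslid base
    hits = sum(load_address(rng, aslr) == guess for _ in range(trials))
    return hits / trials

print(attack_success_rate(aslr=False))  # 1.0: every attempt lands
print(attack_success_rate(aslr=True))   # roughly 1/256: most attempts just crash
```

Real ASLR offers far more entropy than 8 bits, and combined with DEP (which stops the injected payload from executing even on a lucky hit), it turns a reliable exploit into a noisy, unreliable one.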
In many cases, the exploits that are working are not insanely creative bits of work from elite attack teams, but rather copies of exploits produced by legitimate security researchers.
“These people aren’t really writing their own exploits. They’re begging for scraps from security researchers,” Lindh said.
Though much of the attention in the security community and media is focused on zero days and novel attacks, a lot of the damage in the real world is done through the use of exploits against known vulnerabilities. Addressing those holes is an efficient way to increase your winning percentage, Lindh said.
“We need to know that when we’re seeing vulnerabilities in software being exploited, they’re not the ones we’ve identified as being critical. We have to change,” he said. “We have to plug the gaps that are left. We need to do this based on what was learned, and then we need to do it again and again and again. Sooner or later the world’s going to change and we have to change with it. We need to get better at prioritizing what we do. We have to stop feeding users all this BS about APTs all the time. It’s not helping.”
Attackers are continuing to pile on a critical Internet Explorer zero day that remains unpatched two weeks after it was reported.
During the last two weeks, it appears that at least three separate targeted attack campaigns have been using the same bug previously exploited by Operation DeputyDog, a campaign that compromised Japanese media outlets and tech systems in the middle of September.
Researchers at FireEye initially discovered the DeputyDog campaign – which leveraged the CVE-2013-3893 vulnerability – a little over a week ago. Now word comes that three other, unconnected campaigns, Taidoor, th3bug and Web2Crew are also using the same exploit.
Web2Crew was spotted on September 25 using the Internet Explorer vulnerability to drop the remote access Trojan PoisonIvy onto machines – some belonging to a financial institution. While the exploit was hosted on a server in Taiwan, an IP address from Hong Kong was used to host its command and control server, an IP address that FireEye associated with Web2Crew during the month of August.
Thanks to the CVE-2013-3893 vulnerability, Taidoor, a type of malware that was seen compromising victims in Taiwan over the summer, surfaced on a Taiwanese government website on Sept. 26.
Lastly, FireEye also noticed a campaign by malicious actor th3bug using the vulnerability on Sept. 27. That campaign, much like Web2Crew, unleashed a PoisonIvy payload to those who visited any websites it compromised.
FireEye’s Ned Moran and Nart Villeneuve, who wrote a blog entry about the new campaigns yesterday, note that this is a common occurrence.
“It is not uncommon for APT groups to hand off exploits to others, who are lower on the zero-day food chain – especially after the exploit becomes publicly available,” the two wrote.
While the exploit isn’t publicly available per se, it certainly has become more widespread throughout the cybercrime underground as of late. On Monday Metasploit released an exploit module for the vulnerability, something that will almost assuredly ramp up attacks using the bug.
While Microsoft released a Fix It tool for the bug in September and urged users of older IE versions to download and apply it, some thought the company might still issue an out-of-band patch to fix the flaw. At this point, with the company’s usual Patch Tuesday release scheduled for next Tuesday, it seems that users will remain vulnerable for at least another week.
It’s been 14 days since Microsoft issued an advisory and temporary mitigation for a zero-day vulnerability in Internet Explorer, one that is being actively exploited in the wild and that some experts call as severe a browser bug as you can have.
Yet users have since had little more to shield them from these active attacks than a Fix It tool released by Microsoft on Sept. 17. In the meantime, exploits have already taken down a number of Japanese media sites in a watering hole attack targeting government agencies and manufacturers in Japan, and have been implicated in other attacks in Asia going back further than first thought. Microsoft has yet to issue an out-of-band patch for the bug, and with Patch Tuesday a week away, it’s increasingly likely users will continue to be exposed for at least another seven days.
That approach has worked to date because known attacks have been relatively targeted and on a small scale. Yesterday, however, things may have been accelerated with the release of a Metasploit exploit module for CVE-2013-3893. If you’re a believer in HD Moore’s Law, a theory proposed by Josh Corman of Akamai that mirrors Moore’s Law of computing in that casual attacker power grows at the rate of Metasploit, then one could expect an uptick in attacks using this IE bug.
Microsoft did not respond to a request for comment, but last week in response to the attacks against the Japanese media sites, Microsoft said it was continuing to work on developing and testing a security update and urged customers to install the Fix It.
Metasploit engineer Wei Chen wrote in a blogpost that while the exploit currently being seen in the wild targets IE 8 on Windows XP and IE 9 on Windows 7, the vulnerability is found in IE all the way back to IE 6 and the Metasploit module could be tweaked for a broader swath of targets.
According to Chen, the IE8 on XP version of the exploit targets only English, Chinese, Japanese and Korean users, unlike the Windows 7 targets.
“Instead, the exploit would try against any Windows 7 machines (IE8/IE9) as long as Office 2007 or Office 2010 is installed,” he said. “This is because the Microsoft Office Help Data Services Module (hxds.dll) can be loaded in IE, and is required to leverage Return-Oriented Programming in order to bypass DEP and ASLR, and gain arbitrary code execution.”
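Chen’s point about hxds.dll hinges on a module opting out of ASLR and DEP: a library loaded at a predictable address is a stable source of ROP gadgets. As a rough, hypothetical illustration (not FireEye’s or Metasploit’s code), the relevant opt-in flags live in a PE file’s DllCharacteristics field and can be checked with a few lines of Python:

```python
# Sketch: check whether a PE module opts into ASLR (DYNAMICBASE) and DEP
# (NXCOMPAT). A module lacking these flags, like hxds.dll, loads at a
# predictable address and gives attackers a stable source of ROP gadgets.
# Illustrative only, not taken from the exploit or Metasploit module.
import struct

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # ASLR opt-in
IMAGE_DLLCHARACTERISTICS_NX_COMPAT = 0x0100     # DEP opt-in

def dll_characteristics(pe_bytes: bytes) -> int:
    """Extract DllCharacteristics from a PE image's optional header."""
    # e_lfanew at DOS-header offset 0x3C points to the "PE\0\0" signature.
    pe_offset = struct.unpack_from("<I", pe_bytes, 0x3C)[0]
    # Optional header starts 24 bytes after the signature (4-byte signature
    # plus 20-byte COFF header); DllCharacteristics sits at offset 70 in it.
    return struct.unpack_from("<H", pe_bytes, pe_offset + 24 + 70)[0]

def supports_aslr_and_dep(pe_bytes: bytes) -> bool:
    flags = dll_characteristics(pe_bytes)
    return bool(flags & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE) and \
           bool(flags & IMAGE_DLLCHARACTERISTICS_NX_COMPAT)
```

On a real system one would run this over the bytes of the DLL in question; a module that returns False here is the kind of ROP-friendly target such exploits lean on.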
FireEye said infected computers connect to a command and control server in South Korea over port 443; despite the use of that port, the callback traffic is unencrypted. A second sample the company collected connected to the same South Korean IP address, and FireEye also discovered a handful of malicious domains pointing to that IP, which allowed it to make the connection to an attack against security company Bit9 earlier this year. The same email address that registered the South Korean server also registered a domain used in the attack on the security company.
Privat24, the mobile banking application for Ukraine’s largest commercial bank, contains an insufficient validation vulnerability in its iOS, Android, and Windows Phone apps that could give an attacker the ability to steal money from user accounts after bypassing its two-factor authentication protection.
The process validation issue arises from a problem in the way PrivatBank has configured the server that handles all of its mobile banking clients. On his website and on the Full Disclosure mailing list, security researcher Eugene Dokukin explains that this vulnerability allowed him to bypass Privat24’s one-time password (OTP) mechanism. However, Dokukin needed to chain in a second attack in order to compromise the banking application completely.
Ideally, Privat24 should send users an OTP via SMS each time they log in. In reality, however, the bank only sends this code when users initially install the application on their Android, iOS, or Windows Phone device. Once the application is installed and verified with the initial OTP on a particular device, users can access the application without clearing that barrier again. For the PrivatBank website, on the other hand, the bank sends a new OTP each time a user attempts to log in.
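The difference between the two policies can be sketched in a few lines. This is a hypothetical toy model, not PrivatBank’s actual implementation; all names and the OTP format are made up:

```python
# Toy login flow contrasting per-login OTPs with install-time-only OTPs.
# Hypothetical sketch; not PrivatBank's code.
import secrets
from typing import Dict, Optional, Set

class OtpPolicy:
    def __init__(self, per_login: bool):
        self.per_login = per_login
        self.activated_devices: Set[str] = set()
        self.pending_otps: Dict[str, str] = {}  # device_id -> expected code

    def start_login(self, device_id: str) -> Optional[str]:
        """Return an OTP challenge if one is required for this login, else None."""
        if self.per_login or device_id not in self.activated_devices:
            code = f"{secrets.randbelow(10**6):06d}"
            self.pending_otps[device_id] = code
            return code  # in reality this would be sent out of band, e.g. via SMS
        return None  # install-time-only policy: no challenge after activation

    def finish_login(self, device_id: str, code: Optional[str]) -> bool:
        expected = self.pending_otps.pop(device_id, None)
        if expected is None:
            # No challenge was issued; the password alone decides the login.
            return device_id in self.activated_devices
        if code == expected:
            self.activated_devices.add(device_id)
            return True
        return False
```

Under the install-time-only policy, once a device is activated an attacker who has obtained the password faces no OTP at all on subsequent logins, which is the gap Dokukin describes.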
PrivatBank protects its users’ accounts with their mobile number – used as the username or account number – and a password. So users would need their password to log in with or without the OTP. Dokukin’s attack is therefore a tricky one. An attacker would need a second attack, perhaps using malware or some sort of phishing scheme, to ascertain a user’s account password before being able to compromise the application and potentially steal money.
Dokukin said he contacted PrivatBank and reported the vulnerability to them. They confirmed the problem to Dokukin but have yet to fix it. The researcher has not yet released all of the technical details explaining how this attack works, but says he intends to do so once PrivatBank updates their applications with a patch fixing the bug.
Threatpost reached out to PrivatBank as well, but the company did not respond to a request for comment at the time of publication.
There’s been no shortage of discussion and debate in recent weeks about the possibility that the NSA has intentionally weakened some cryptographic algorithms and cipher suites in order to give it an advantage in its intelligence-gathering operations. If you subscribe to the worst-case scenario line of thinking, then most of the commonly used ciphers are compromised. If you’re more optimistic, then you tend to think that maybe the NSA has some private capabilities against encryption protocols and is exploiting them. However, Jon Callas, co-founder of Silent Circle, which announced Monday that it was moving away from potentially compromised ciphers, said that it really doesn’t matter whether the NSA has done this, because the damage has been done.
“This issue that we’re dealing with now is, can we trust any of this?” Callas said in an interview. “It really boils down to, they’ve said they’ve tried to break things, so have they done that or not? If you’re going to look at it from a realistic point of view, it really doesn’t matter whether they did it. It’s as much about the NSA undermining confidence.”
Silent Circle, a provider of secure messaging systems, made the decision to replace AES and SHA-2 in its products with Twofish and Skein, respectively. AES and SHA-2 both were part of competitions sponsored by the National Institute of Standards and Technology and recent revelations have shown that the NSA may have exerted some influence on the NIST standards process in some cases. It’s not known which protocols may have been affected, and that uncertainty is part of what drove Silent Circle’s decision, as well as the debate in the security community about what actions to take, if any.
Callas, a cryptographer and a founder of PGP Corp., said that Silent Circle had been thinking about this move for a few weeks before the announcement and that the technical implementation would not be difficult. For companies such as Silent Circle, whose customers depend on the security and confidentiality of the products, the issue comes down to removing doubt from its customers’ minds. But for the rest of the Internet community, there are other issues to consider as it relates to the security of some of the elliptic curves designed by NIST and the NSA.
“The thing that would be the most likely, and in some ways the scariest, is what if the NSA created these curves in good faith and then the mathematicians there found issues with them, that they’re weaker than anybody thought,” Callas said. “There are things that we’ve discovered about elliptic curves in the past. If the NSA knew that these curves were weaker than we thought, does it matter?
“The defense we’ve always had in the past is that the crypto the NSA recommended was the same stuff they used to protect top secret data, so we could always say, Well, would they shoot themselves in the foot, too? Now, it seems perfectly plausible to me that if the intel side of the house found something that gave them an advantage over everybody else, they would keep it from the other side of the house. Now we’re really wondering if maybe they would shoot themselves in the foot on purpose.”
The ciphers that Silent Circle plans to use in its products both were designed independently, something that Callas believes will be important for the company’s customers going forward.
“There have always been people who haven’t trusted the standard things, and even the NIST people would say, if you don’t trust it, go use these other finalists over here that are intellectual property free,” he said. “That got me thinking. We have to find our way through the legitimate mistrust that starts to resemble a hall of mirrors in a bad 1960s spy movie.”
Debian developers alerted Linux users late last week of a new Linux kernel build, linux-2.6, that fixes 11 separate vulnerabilities that could open the kernel to a denial of service attack, information leak or privilege escalation.
Dann Frazier, a Debian administrator, announced the security updates via the project’s mailing list late Friday.
The first two Common Vulnerabilities and Exposures identifiers fix information leaks in the kernel that could be exploited via a 64-bit system (CVE-2013-2141) and while it may sound archaic, a CD-ROM driver (CVE-2013-2164). According to Jonathan Salwan, a Paris-based Linux researcher, under certain conditions a local user on a system with a malfunctioning CD-ROM drive could gain access to sensitive kernel memory.
Salwan also discovered an additional vulnerability, in the openvz kernel, that local users could exploit to gain access to sensitive kernel memory.
Kees Cook, a member of the Ubuntu security team, discovered four of the 11 vulnerabilities. Two of those can lead to an attacker crashing the system in a denial-of-service attack (CVE-2013-2888 and CVE-2013-2892), while the other two are somewhat less serious and affect the block subsystem and the b43 network driver. Those vulnerabilities should really only be of concern to those with specially configured systems.
The remaining fixes address a variety of issues, including memory leaks in the implementation of the PF_KEYv2 socket family and the Linux SCTP protocol.
Per usual, Debian, which maintains one of the more popular Linux distributions today, is encouraging users to upgrade to linux-2.6 and any associated user-mode-linux packages.
Those looking for more information on the vulnerabilities can head to Debian’s security update, DSA-2766-1, from Friday.
Microsoft’s report on compliance with law enforcement requests for data demonstrates a status quo for the software giant from the last reporting period. While the number of requests from law enforcement dropped worldwide in the first six months of 2013, Microsoft complied with 79 percent of requests resulting from court orders, subpoenas and warrants. Only 2.2 percent of those requests resulted in the customer content from services such as Outlook, Hotmail, Xbox Live, or SkyDrive being turned over to the authorities; no Skype requests for user content were made.
Microsoft’s numbers in this report do not take into account national security requests for data; those are handled in a separate report and remain a contentious subject for technology companies. A slew of them, including Microsoft, Google, Facebook, Yahoo and most recently LinkedIn and Dropbox, have petitioned the Foreign Intelligence Surveillance Court (FISC) for permission to publish aggregate numbers on National Security Letter requests for customer data.
In the meantime, Microsoft’s Law Enforcement Requests Report shows that the majority of requests come from authorities in the U.S., United Kingdom, Turkey, Germany and France; Skype-related requests are largely concentrated from the U.S., U.K., France and Germany. Microsoft said that no Skype content data was turned over to law enforcement among the 81 percent of Skype requests complied with.
“This new data shows that across our services only a tiny fraction of accounts, less than 0.01 percent, are ever affected by law enforcement requests for customer data,” the report said. “Of the small number that were affected, the overwhelming majority involved the disclosure of non-content data.”
Non-content data, according to the report, includes the user’s name, billing address, IP address history and more. Content data, on the other hand, is defined as the text of an email message, images and files stored in SkyDrive files, calendar information and contact information.
Encompassing all Microsoft services, including Skype, the company received 7,014 law enforcement requests affecting 18,809 accounts; 11 percent of those requests resulted in user content being turned over and 65 percent of non-content data requests were complied with. Of the requests Microsoft did not comply with, either a legal burden was not met, or no customer data was found for the account in question.
As for Skype data alone, 759 requests were made on 1,564 accounts. No content requests were made in relation to Skype, while 790 requests for non-content data – 80 percent of the requests – were complied with.
Skype, which was acquired in 2011 by Microsoft, has been a centerpiece of the NSA surveillance scandal since it became public in June. Almost immediately, data leaked by former NSA whistleblower Edward Snowden indicated that not only did the spy agency have pre-encryption access to Outlook and Hotmail data, but it had also collaborated with Microsoft on access to SkyDrive and Skype. According to a report in the Guardian, the NSA boasted of having been able to triple the number of Skype video calls captured in the Prism program.
Microsoft has denied these accusations and along with other massive tech companies has petitioned the FISA court for the ability to enhance its reporting on requests from the government related to national security. To date, companies are allowed to publish NSL request data in bundles of 1,000; Microsoft reported 0-999 for 2012 and between 1,000 and 1,999 the year before.
Smaller companies such as LinkedIn and Dropbox argue that that level of reporting decreases transparency and could make those companies appear to be bigger national security targets for data requests than they actually are.
The state of embedded device security is poor, and there hasn’t been much in the way of discussion to the contrary. It’s well established that vendors skimp on security, selling, for example, routers and other networking gear protected only by default passwords, or other critical devices engineered to be accessible with a simple telnet command. These practices pose an enormous risk to the infrastructure supporting those devices, leaving them open to attack by hackers. Those vulnerabilities can lead to data loss, network performance degradation or, worse, put lives in danger if critical services such as water or power are impacted.
For Metasploit creator HD Moore, this was a call to action. Moore has invested serious time in examining data from previous scans of the IPv4 address space, looking for equipment exposed by shoddy default configurations and other vulnerabilities. His own Critical.io project, along with the Internet Census 2012, the Carna botnet and a host of academic and research tools that scan the Internet and return bulk data on device exposures, has done plenty to shine a harsh light on the risks these Web-facing devices pose.
But Moore believes there is plenty of room for additional analysis. He’s advanced his work by collaborating with a team of researchers at the University of Michigan on Project Sonar, a repository of scan data that has been responsibly collected by the researcher community. Moore said he hopes to engage the security community not only in analyzing the data produced by scans of public-facing networks, but also in contributing data sets. Project Sonar is being hosted by the University of Michigan at scans.io.
“We need more eyes on it because we need the shame to fall on these vendors for the terrible products they’re producing,” Moore said, adding as an example, that he’s found upwards of 10,000 command shells sitting online accessible via telnet that would give an outsider root access to the device in question. “The fact that we’ve got issues like that where there’s not even a pretense of security, yet these devices are not getting any better and in some cases we’re seeing an expansion of the vulnerable devices year over year, that was a call to action to me to make it harder for vendors to avoid the scrutiny they deserve.
“The thing is, a lot of people like to see results and like to see the tiny pictures, but not many people want to dig into it and pull stuff out,” Moore said. “We’re going to try to do that, make it palatable for amateur researchers and everyday IT admins to use as a resource.”
Currently, there are five data sets hosted by Project Sonar, formally known as the Internet-Wide Scan Data Repository; the two teams used a host of tools to collect the data including ZMap, an Internet scanner developed at UM, UDPBlast, Nmap, and MASSCAN among others. Two datasets were contributed by the University of Michigan and those include scans of HTTPS traffic looking for raw X.509 certificates (43 million have been included from 108 million hosts) as well as data from an IPv4 scan on port 443 conducted last October to measure the impact of Hurricane Sandy. Rapid7 has also contributed three data sets: service fingerprints from Moore’s Critical.IO project; a scan of IPv4 SSL services on port 443; and a regular DNS lookup for all IPv4 PTR records.
“After going through the data enough times, it became obvious there are so many different vulnerabilities and issues that really just take some human eyes on things,” Moore said. “It really doesn’t make sense to sit on this amount of data and not share it.”
Researchers and IT managers can use the data in a variety of ways; in bulk, researchers could generate vulnerability data per vendor or per product, or on a narrower scope, the data can be used to do asset inventory, for example, on a particular IP range in order to identify existing vulnerabilities. A Rapid7 team used the data, for example, to accelerate a penetration test on an 80,000-node network. Moore said an entire asset inventory was done in about 20 minutes as opposed to three days with customary tools and scans.
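The asset-inventory use case amounts to filtering a bulk scan dump down to hosts inside a range you own. A rough sketch, using made-up records standing in for something like Sonar’s DNS PTR dataset (the field layout here is illustrative, not the repository’s actual format):

```python
# Filter bulk scan records (ip, ptr-name pairs) down to a network range.
# The sample data and record layout are invented for illustration.
import ipaddress
from typing import Iterable, Tuple

def hosts_in_range(records: Iterable[Tuple[str, str]], cidr: str):
    """Yield (ip, ptr) pairs whose IP falls inside the given CIDR block."""
    net = ipaddress.ip_network(cidr)
    for ip, ptr in records:
        if ipaddress.ip_address(ip) in net:
            yield ip, ptr

scan = [
    ("192.0.2.10", "gw.example.net"),
    ("198.51.100.7", "mail.example.org"),
    ("192.0.2.200", "printer.example.net"),
]
inventory = list(hosts_in_range(scan, "192.0.2.0/24"))
```

Run against a real multi-gigabyte dump, the same streaming approach (a generator rather than loading everything into memory) is what makes a quick inventory over a large IP range practical.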
Early feedback has been positive, and Moore said some researchers have already begun to build Web services and queries around the data. Moore added that UM and Rapid7 hope that additional datasets will eventually be contributed, so long as the collection efforts are done legally and within ethical bounds. It’s for that reason, Moore said, that neither UM nor Rapid7 will host data collected by the Internet Census or the Carna botnet, the legality of which is still in question.
“Right now we’re steering away from offering any kind of Web service; I don’t want to have a service where folks are depending on me to get them results, nor do I want to be responsible for seeing what queries they run,” Moore said. “It’s not what we’re trying to solve. We’re taking the bulk data that’s multiple gigabytes, 5-6 terabytes, and make that available on the website in bulk form for anyone who’s doing research to download it. At the same time, we’re taking different slices of the data as well and saying ‘Let’s just take the name fields for this packet,’ or parse out a particular field and make those available for folks who are doing more casual testing.”
I had a chance to visit a number of industrial events this year and can see the evolution of cybersecurity in the industrial field. One of these was the 4th National Institute of Standards and Technology (NIST) Cybersecurity Framework Workshop (CFW). Kaspersky was in attendance at the previous events, but the main difference with this one was that now we had sponsors.
The 4th Workshop was another round of gathering feedback on the latest version of the Cybersecurity Framework, published on August 28, 2013. My takeaways from this workshop (not far from those of the previous, 3rd workshop) include:
- The Cybersecurity Framework is not about “how,” it’s about “what”
- The CFW is more of a marketing push for newbies and a refresher for pros
- There is huge demand among industrial people for guidance on the “how”
- Whitelisting and Default Deny are a must
Overall, the resulting Framework is not specific enough for any of the 17 government-specified Critical Infrastructure Sectors to understand the practical steps of implementing a cybersecurity strategy, or at least to understand the practical set of instruments (aka security controls).
For those who are not familiar, the Framework consists of five Functions, Categories for each of those Functions, and Subcategories for each Category; and, separately, security Profiles and maturity Tiers.
Functions describe, in general terms, what your cybersecurity program should consist of: Identify, Protect, Detect, Respond and Recover. Most people agree on these Functions, while some argue that Improve/Update should be explicitly added, because in the security domain, more than in many others, protection becomes obsolete: you may be secure today, but in 12 months that may no longer be the case because of new attack methods.
The Categories included in the Framework are comprehensive as well. Unfortunately, the Subcategories (please find the full list on page 14 of the document itself) are a mix of abstract categories, which help the reader see the domain and potential goals but leave the selection of methods open, and technical security controls that many sectors find inapplicable or incomplete. So it’s unsurprising that for the second workshop in a row we saw the same story: whenever any of the workgroups started discussing Subcategories, the work stalled. Most of the participants had not examined the entire list, and representatives of different sectors are unsatisfied with the way the Subcategories are set at all – each for their own reasons.
Overall, the subcategories decided upon can be considered quite a failure. For example, the only control related to Industrial Control Systems simply says, “PR.PT-5: Manage risk to specialized systems, including operational technology (e.g., ICS, SCADA, DCS, and PLC) consistent with risk analysis”. It’s very specific, and helpful for OT people, if you know what I mean.
Apparently, it’s rather hard to define “Security Controls” in a universal way for different sectors spanning IT and industrial systems – and this doesn’t even take smaller sectors into account. For example, the financial sector is normally believed to take care of data quite well, but the example of the Treasury was quite illustrative: all of its data is public, so confidentiality isn’t a major concern, but the integrity of transactions and of data on the shares in people’s possession is a must. This is similar to the situation for industrial control systems.
My impression is that NIST has decided to leave the work on defining the exact set of subcategories and controls to individual critical infrastructure sectors.
However, this method is not good for the sectors that depend on industrial networks: there are nine sectors where industrial systems prevail, but their regulators and industry associations differ (DoE, DoT and so on). So it is unclear whether each sector has to do the “instantiation” of the Framework on its own, and whether this work will be repeated nine times with different results, even though the sectors share many commonalities due to their reliance on Industrial Control Systems.
Also, NIST will leave the Framework implementation details to each sector. One of the questions that wasn’t answered at the workshop was, “How do you implement security along this framework – or at least, what will you start with?”
One option is to remove the Subcategories from the Framework to make it consistent: rather than trying to present universal security controls, make the Categories a goal-setting framework.
The Framework also includes another dimension: Profiles (what does your organization need among the variety of categories and controls – what are your security priorities, based on the business specifics?) and Tiers (how mature is your cybersecurity?). While this seems like common sense – frameworks in different domains share basically the same approach to “flexibility” and “maturity” – in practice the CFW is rather a mess here, because it is unclear how to measure which Tier you are at and, in turn, what that Tier stands for.
So what’s the good news?
- NIST adopted Kaspersky Lab’s whitelisting (Default Deny) approach to security for Critical Infrastructures – namely, “PR.PT-3: Implement and maintain technology that enforces policies to employ a deny-all, permit-by-exception policy to allow the execution of authorized software programs on organizational systems (aka whitelisting of applications and network traffic)”. We believe that this totally makes sense, and we are happy to know that our voice has been heard and our vision shared.
- A major goal and a major impact of the Cybersecurity Framework is marketing: pushing all Critical Infrastructures, including the many that do not yet have any cybersecurity program, to start doing something, and providing more of a budget to the CISOs of those who already have a clear vision. Many people suggested putting the Framework in a marketing brochure to make this clear.
- The third positive result of this workshop is that, once pushed widely, the Framework can help people from different companies better understand each other in the cybersecurity domain. That matters because most Critical Infrastructures are interconnected and outsource to each other, which could produce a serious domino effect in a cybersecurity incident. Similar cybersecurity marketing efforts could be helpful in many countries for the sake of Critical Infrastructure cybersecurity.
- Fourth, the sector-specific work of specifying security controls and mapping the Framework to existing sector and industry standards will be done, though who will do it has not yet been identified. The cyberframework could become a cross-reference between different sectors’ standards and frameworks, which will also help build better understanding between entities on the technical level.
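The PR.PT-3 deny-all, permit-by-exception control quoted in the first bullet above boils down to a simple mechanism: execution is refused unless a binary is explicitly approved. As a toy sketch (the hash list and file contents are made up, and this stands in for no particular vendor’s product):

```python
# Toy default-deny (whitelisting) check: a binary may run only if its
# SHA-256 hash is on an approved list. Everything else is refused.
# Illustrative sketch; hashes and contents are invented.
import hashlib

APPROVED_SHA256 = {
    hashlib.sha256(b"trusted-hmi-binary").hexdigest(),
    hashlib.sha256(b"trusted-historian-binary").hexdigest(),
}

def may_execute(binary: bytes) -> bool:
    """Default deny: permit execution only by explicit exception."""
    return hashlib.sha256(binary).hexdigest() in APPROVED_SHA256
```

The point of the control is the default: an unknown dropper fails the check automatically, with no signature update required, which is why the approach suits the relatively static software populations of industrial systems.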
While the Cybersecurity Framework may serve as the first step in pushing Critical Infrastructure security, the only way to actually increase protection is to make sure it is followed by step two (where do I start?) and step three (what are the best practices to follow?) for each of the Critical Infrastructure sectors. Among these sectors, there are 10 industrial-centric ones that are less experienced in IT security overall and whose processes have a different nature (high availability instead of high confidentiality).
So, the question still remains: how can we make industrial security more practical for the current threat landscape?
Kaspersky Lab is actively exploring possible options with our industrial partners.