Threatpost for B2B
A new piece of ransomware that emerged earlier this month is encrypting its victims’ files with an easily breakable cryptographic algorithm. BitCrypt, as it is known, purports to lock down files with 1024-bit RSA encryption but actually deploys a much weaker 426-bit key.
According to researchers Cedric Pernet and Fabien Perigaud, the makers of BitCrypt may have accidentally deployed this much weaker encryption, which is so easy to break that the researchers say they can crack it on a standard computer in a matter of hours. Pernet and Perigaud are a pair of researchers working for Cassidian, the security division of the European Aeronautic Defence and Space Company (EADS).
The researchers first came across BitCrypt after it showed up and encrypted everything on a computer belonging to one of their friends. Research revealed that the domain ‘bitcrypt[dot]info’ was registered on February 3. Presumably, the victims of BitCrypt are directed toward this website, where they are told they must set up a Bitcoin purse and pay 0.4 Bitcoins into the Bitcoin wallet of the person or people responsible for BitCrypt. Once they have done that, there is a set of fields on the website where victims can enter their Bitcoin wallet ID number and their email address. Once the criminals see that they have received a payment from the infected users’ wallet, they can then send off the appropriate encryption key, so that user can then decrypt their files.
Pernet and Perigaud managed to find and analyze a VirusTotal sample of BitCrypt that had been submitted on February 9. BitCrypt claimed to use RSA 1024-bit cryptography.
The researchers then did a bit of reverse engineering and at first glance everything seemed legitimate. While it sought out files to encrypt, the malware ran a watching thread that monitored user activity and blocked any attempt to run taskmgr.exe or regedit.exe. The malware was encrypting any files with the following extensions:
.dbf, .mdb, .mde, .xls, .xlw, .docx, .doc, .cer, .key, .rtf, .xlsm, .xlsx, .txt, .xlc, .docm, .xlk, .text, .ppt, .djvu, .pdf, .lzo, .djv, .cdx, .cdt, .cdr, .bpg, .xfm, .dfm, .pas, .dpk, .dpr, .frm, .vbp, .php, .js, .wri, .css, .asm, .jpg, .jpeg, .dbx, .dbt, .odc, .sql, .abw, .pab, .vsd, .xsf, .xsn, .pps, .lzh, .pgp, .arj, .gz, .pst, and .xl
However, upon decoding one of BitCrypt’s configuration files, it became very apparent that BitCrypt’s writers had failed to deploy the encryption correctly.
“The [decoded] number has 128 digits,” the pair wrote in a blog post, “which could indicate a (big) mistake from the malware author, who wanted to generate a 128 bytes key.”
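The arithmetic behind that observation is simple enough to check. The sketch below (plain Python, no assumptions beyond what the researchers reported) converts 128 decimal digits into a bit length:

```python
import math

# The researchers found a 128-decimal-digit modulus. Each decimal digit
# carries log2(10) ~= 3.32 bits of information, so the modulus is far
# shorter than the advertised 1024 bits.
bits = 128 * math.log2(10)
assert 425 < bits < 426  # ~425.2, i.e. at most a 426-bit modulus
```

This matches the RSA-426 figure the researchers arrived at: the author likely confused 128 decimal digits with 128 bytes (which would indeed give a 1024-bit key).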
As it turned out, BitCrypt was deploying RSA-426 encryption rather than RSA-1024. The researchers managed to break that cryptography in 43 hours on a quad-core PC and just 14 hours on a 24-core server.
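Once a modulus that short is factored (real 426-bit factoring uses number-field-sieve tooling such as CADO-NFS, which is what makes the 43-hour figure plausible), recovering the private key is basic arithmetic. A toy sketch with a deliberately tiny, hypothetical modulus illustrates the key-recovery step:

```python
# Toy illustration of RSA private-key recovery after factoring the modulus.
# The tiny primes here (p=61, q=53) are purely illustrative; they stand in
# for the 426-bit factors the researchers recovered with real tools.
def factor(n: int) -> tuple[int, int]:
    """Trial division; only feasible because the toy modulus is tiny."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    raise ValueError("n is prime")

def recover_private_key(n: int, e: int) -> int:
    p, q = factor(n)
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)  # modular inverse (Python 3.8+)

n, e = 3233, 17                 # toy public key: 61 * 53
d = recover_private_key(n, e)
c = pow(65, e, n)               # "encrypt" a message
assert pow(c, d, n) == 65       # decrypt it with the recovered key
```

The same algebra applies at 426 bits; only the factoring step gets harder, and at that size it is within reach of commodity hardware.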
In general, ransomware is a type of malware that encrypts seemingly important files on the machines of its victims. The attackers then ask their victims to make a payment in exchange for the encryption key that would decrypt those files. There is never any guarantee that paying the ransom will decrypt anything.
In September 2013, a particularly potent piece of ransomware called CryptoLocker emerged. While Ransomware is nothing new, CryptoLocker garnered enough attention to become one of those special pieces of malware that gets press attention outside the security industry. CryptoLocker’s efficacy spurred a bit of a surge in new ransomware samples.
For months, weak cryptography has been a hot topic in the security world because of revelations suggesting that the U.S. National Security Agency had found ways of subverting popular cryptographic algorithms deployed by the big Internet firms in order to spy on those companies’ users en masse without warrant. This report from Pernet and Perigaud flips that narrative a bit, demonstrating that not even cybercriminals are immune from making mistakes with cryptography.
If you’d like to read up on exactly where BitCrypt’s authors slipped up, you can find Pernet and Perigaud’s technical analysis here.
SAN FRANCISCO — Researchers at Bromium Labs are expected to announce today they have developed an exploit that bypasses all of the mitigations in Microsoft’s Enhanced Mitigation Experience Toolkit (EMET). Principal security researcher Jared DeMott is scheduled to deliver a presentation this morning at the Security BSides conference explaining how the company’s researchers were able to bypass all of the memory protections offered within the free Windows toolkit.
The work is significant given that Microsoft has been quick to urge customers to install and run EMET as a temporary mitigation against zero-day exploits targeting memory vulnerabilities in Windows or Internet Explorer.
EMET is not meant to be a permanent fix; instead, it is supposed to terminate or block actions by malware or exploits targeting previously unreported vulnerabilities until a patch is available.
Microsoft is expected to release the latest version of EMET this week during the RSA Conference; Rahul Kashyap, chief security architect at Bromium, said the company has been working closely with Microsoft and expects the vulnerability to be addressed in the new EMET release.
EMET comes with a dozen different mitigations starting with Data Execution Prevention and Address Space Layout Randomization, two key memory protections in Windows, as well as a handful of mitigations against return-oriented programming (ROP), heap spray and SEHOP mitigations, and more.
Kashyap said Bromium’s technique bypasses all of EMET’s mitigations, unlike previous bypasses that were able to beat only certain aspects of the tool.
“We analyzed all of the protections, and took an IE exploit and then we kept on tweaking the exploit payload until we were able to bypass all the mitigations available in EMET,” Kashyap said. “Everything is bypassed in its latest version.”
Kashyap said EMET has raised the bar significantly for exploit writers trying to beat Windows’ protections. Malware writers, such as those behind Operation SnowMan targeting the latest IE zero-day, have taken to adding modules that scan computers for EMET libraries and will not execute if EMET is installed.
“EMET, like any other tool, needs to know exploitation vectors to be able to block them. We tried to attack that very core, fundamental architectural drawback that most tools today have, which is you need to detect an exploit in order to protect,” Kashyap said. “In this case, we studied the mitigations available in EMET and then we tweaked a payload to create a new vector variant which could bypass the existing mitigations.”
In a paper released today, DeMott explained that the researchers intended initially to target just the five ROP protections in EMET with a real-world browser exploit. The project grew to include all relevant protections including stack pivot protection, shellcode complete with an EAF bypass and more, DeMott wrote.
“The impact of this study shows that technologies that operate on the same plane of execution as potentially malicious code, offer little lasting protection,” DeMott wrote. “This is true of EMET and other similar userland protections.”
Bromium said its research focused on 32-bit Windows 7 systems running EMET 4.0 and 4.1 (ROP protection is not implemented for 64-bit processes, the paper said). ROP is an exploitation technique that evolved from ret2libc and enables an attacker to execute arbitrary logic by re-using code that already exists in the target process. The ROP technique changes executable permissions in memory space, DeMott explained in the paper, in order to execute the attacker’s code located elsewhere. An attacker must chain together a series of gadgets in order for ROP to succeed.
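The chaining idea can be modeled without any real machine code. The toy Python sketch below (my illustration, not Bromium’s exploit) treats each “gadget” as a short existing routine and the chain as a list of return targets that execute one after another:

```python
# Toy model of a ROP chain: each "gadget" is a short piece of pre-existing
# code; the attacker supplies only a sequence of return targets plus data
# on the stack. No new code is injected, existing code is re-used.
def pop_eax(regs: dict, stack: list) -> None:
    regs["eax"] = stack.pop()          # gadget: pop eax; ret

def add_eax_ebx(regs: dict, stack: list) -> None:
    regs["eax"] += regs["ebx"]         # gadget: add eax, ebx; ret

def run_chain(chain: list, stack: list) -> dict:
    regs = {"eax": 0, "ebx": 5}
    for gadget in chain:               # each "ret" falls through to the next
        gadget(regs, stack)
    return regs

regs = run_chain([pop_eax, add_eax_ebx], [37])
# regs["eax"] == 42: attacker-controlled computation from reused code
```

This is why DEP alone is insufficient: the executed instructions are all legitimate, already-mapped code, and only the control flow is attacker-chosen.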
EMET has been bypassed numerous times before. Researcher Aaron Portnoy, cofounder of Exodus Intelligence, presented a paper during last year’s SummerCon that explained a number of EMET bypasses. Two years ago, a researcher in Iran named Shahriyar Jalayeri reported two bypasses of EMET’s five ROP protections.
You can expect researchers to continue to try to poke holes in EMET. The upcoming Pwn2Own contest at the CanSecWest Conference is offering a $150,000 grand prize to anyone able to bypass EMET running on Windows 8.1 and Internet Explorer 11.
The certificate-validation vulnerability that Apple patched in iOS yesterday also affected Mac OS X up to 10.9.1, the current version. Several security researchers analyzed the patch and looked at the code in question in OS X and found that the same error exists there as in iOS.
Researcher Adam Langley did an analysis of the vulnerable code in OS X and said that the issue lies in the way that the code handles a pair of failures in a row. The bug affects the signature verification process in such a way that a server could send a valid certificate chain to the client and not have to sign the handshake at all, Langley found.
“This signature verification is checking the signature in a ServerKeyExchange message. This is used in DHE and ECDHE ciphersuites to communicate the ephemeral key for the connection. The server is saying ‘here’s the ephemeral key and here’s a signature, from my certificate, so you know that it’s from me’,” Langley wrote in his analysis. “Now, if the link between the ephemeral key and the certificate chain is broken, then everything falls apart. It’s possible to send a correct certificate chain to the client, but sign the handshake with the wrong private key, or not sign it at all! There’s no proof that the server possesses the private key matching the public key in its certificate.”
Some users are reporting that Apple is rolling out a patch for this vulnerability in OS X, but it has not shown up for all users as yet. Langley has published a test site that will show OS X users whether their machines are vulnerable.
He points out that because of the nature of the bug, certificate pinning likely would not have had any effect on this vulnerability. Certificate pinning allows clients such as browsers to specify the exact certificate that they associate with a given site, helping to prevent man-in-the-middle attacks. But in this case, there’s no problem with the certificate itself.
“Because the certificate chain is correct and it’s the link from the handshake to that chain which is broken, I don’t believe any sort of certificate pinning would have stopped this. Also, this doesn’t only affect sites using DHE or ECDHE ciphersuites – the attacker gets to choose the ciphersuite in this case and will choose the one that works for them,” Langley said.
Researchers at CrowdStrike also looked at the code, and said that likely attack scenarios could include interception of sessions with webmail services, or any other SSL-protected site, for that matter.
“Due to a flaw in authentication logic on iOS and OS X platforms, an attacker can bypass SSL/TLS verification routines upon the initial connection handshake. This enables an adversary to masquerade as coming from a trusted remote endpoint, such as your favorite webmail provider and perform full interception of encrypted traffic between you and the destination server, as well as give them a capability to modify the data in flight (such as deliver exploits to take control of your system),” their analysis says.
The CrowdStrike researchers said that finding non-encrypted packet data in the SSL/TLS handshake could be an indication of exploit attempts against this vulnerability.
Apple on Friday quietly pushed out a security update to iOS that restores some certificate-validation checks that had apparently been missing from the operating system for an unspecified amount of time.
Apple released iOS 7.0.6 on Friday, and the only content in the update was a small security fix that the company said addressed a problem with the way that iOS handled certificate validation when establishing a secure connection.
“Secure Transport failed to validate the authenticity of the connection. This issue was addressed by restoring missing validation steps,” the Apple advisory says.
The wording of the description is interesting, as it suggests that the proper certificate-validation checks were in place at some point in iOS but were later removed somehow. An exploit against this vulnerability would allow an attacker with a man-in-the-middle position on the victim’s network to read supposedly secure communications. It’s not clear when the vulnerability was introduced, but the CVE entry for the bug was reserved on Jan. 8.
“An attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS,” Apple said.
Certificate validation is a key step in establishing secure sessions, as attackers often employ techniques that involve spoofing certificates for high-value sites such as Google or Yahoo in the hopes of capturing users’ confidential data, such as user IDs and passwords. If the client doesn’t check to ensure that the certificate presented is in fact valid and issued for the proper site, the security of the connection can’t be trusted.
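For contrast, this is what the validation step looks like when a client library does it by default. A minimal Python sketch (an illustration of correct client behavior, not Apple’s Secure Transport code):

```python
import socket
import ssl

# Correct TLS client behavior: verify the certificate chain AND check that
# the certificate matches the hostname. Python's default context enables
# both; Apple's bug effectively skipped the equivalent verification step.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # unverifiable chains are rejected
assert ctx.check_hostname                    # cert must match the server name

def open_verified(host: str, port: int = 443) -> ssl.SSLSocket:
    """Connect and raise ssl.SSLError if the server's certificate
    chain does not validate for this hostname."""
    sock = socket.create_connection((host, port))
    return ctx.wrap_socket(sock, server_hostname=host)
```

The key property is that failure is loud: a connection to a server presenting an invalid or mismatched certificate raises an exception rather than silently succeeding.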
On the point of transparency, the company believes it should be allowed to report the exact number of government data requests it receives, the number of accounts affected by those requests, and the laws used by the government to justify such requests. Presently, the company’s transparency report publicizes the number of law enforcement requests it receives and the number of accounts affected by those requests. However, Dropbox – like other tech firms – is limited in its ability to report information about the number of national security letters (NSLs) it receives.
In its most recent transparency report, the company said it received somewhere between 0 and 250 NSLs. Under its new data request principles, the company says it is continuing to fight for its right to be more explicit about the number of NSLs it receives, carefully noting that it may not receive any such letters at all.
Dropbox also shares the widely-held belief that data requests should be limited to specific people involved in targeted investigations. Therefore, the company says it will resist any requests attempting to gather information from large groups of users unrelated to a specific investigation.
“The US government has been seeking phone records from telecommunications companies related to large groups of users without suspicion that those users have been involved in illegal activity,” the company says. “We don’t think this is legal and will resist requests that seek information related to large groups of users or that don’t relate to specific investigations.”
Much of the conversation revolving around the NSA spying revelations has focused on U.S. citizens. If you listen to NSA director Keith Alexander or other defenders of PRISM and similar programs, they are pretty open about the fact that the agency claims the right to collect information on non-U.S. citizens indiscriminately. Dropbox now stands out in that it says it aims to protect the data of its users, regardless of citizenship.
Beyond that, Dropbox promises its customers that it will do everything in its power to guarantee that the government cannot access user information through backdoors, by exploiting security vulnerabilities, or through any means other than established legal process.
The Facebook acquisition of mobile messaging service WhatsApp has captivated the tech world this week. Much of that has to do with the massive $19 billion price tag and, to a lesser extent, the incredibly fast rise of the company. But while analysts and customers have been examining the deal, some security researchers decided to look at the security of WhatsApp itself.
WhatsApp is a text and multimedia messaging service that uses the Internet, rather than a cellular data network, as its base. The app grew slowly at first but exploded in the last couple of years and today claims 450 million active users. Security researchers at Praetorian, who have been running a project known as Project Neptune to assess the security of mobile apps, did a limited assessment of the iOS and Android versions of WhatsApp and discovered a number of issues around the way the app uses SSL.
The most serious problem they found was that WhatsApp does not enforce certificate pinning. The use of certificate pinning allows apps to specify a specific certificate that they trust for a given server. This helps defeat a number of attacks, specifically man-in-the-middle attacks that rely on spoofing the certificate for a trusted site. Many of the major Web browsers support certificate pinning now, but its adoption in the mobile world has been somewhat slower. Praetorian found that WhatsApp doesn’t enforce SSL pinning, potentially opening users up to MITM attacks.
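The mechanics of pinning are straightforward. Below is a minimal sketch of the control Praetorian found missing: the client ships a known-good SHA-256 fingerprint of the server certificate and rejects connections whose presented certificate does not match. The `PINNED` value here would be a placeholder, not any real WhatsApp certificate hash:

```python
import hashlib
import hmac

# Minimal certificate-pinning check. In a real client, der_cert comes from
# ssl.SSLSocket.getpeercert(binary_form=True) after the TLS handshake, and
# the pinned fingerprint is compiled into the app.
def fingerprint(der_cert: bytes) -> str:
    return hashlib.sha256(der_cert).hexdigest()

def cert_is_pinned(der_cert: bytes, pinned: str) -> bool:
    # compare_digest avoids leaking match length through timing
    return hmac.compare_digest(fingerprint(der_cert), pinned)
```

A man-in-the-middle attacker can present a certificate that chains to *some* trusted CA, but cannot present one whose hash matches the pinned value, so the pinned client refuses the connection.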
“Within minutes, Project Neptune picked up on several SSL-related security issues affecting the confidentiality of WhatsApp user data that passes in transit to back-end servers. This is the kind of stuff the NSA would love. It basically allows them—or an attacker—to man-in-the-middle the connection and then downgrade the encryption so they can break it and sniff the traffic. These security issues put WhatsApp user information and communications at risk,” Paul Jauregui of Praetorian wrote in an explanation of their test.
“WhatsApp does not perform SSL pinning when establishing a trusted connection between the mobile applications and back-end web services. Without SSL pinning enforced, an attacker could man-in-the-middle the connection between the mobile applications and back-end web services. This would allow the attacker to sniff user credentials, session identifiers, or other sensitive information.”
Jauregui said in an email interview that it is unfortunately quite common to find mobile apps that don’t perform certificate pinning.
“Surprisingly, it’s extremely common to see mobile apps without certificate pinning. This security control is used to counter the ability of an attacker to view and modify all traffic passing between the mobile device and backend server. It can also help protect against certificate authority trust failures during client and server negotiation, which coupled with the support of weak and null (plain text) ciphers—as found to be the case in WhatsApp—is an even bigger red flag,” he said.
The researchers also found a few other less-serious issues, including support for null ciphers, meaning that some data isn’t encrypted at all.
“With Null Ciphers supported, if the client mobile application attempts to communicate to the server using SSL and both parties do not support any common cipher suites—as a result of a malicious intercept—then it would fall back to sending the data in clear, plain text. Supporting Null Ciphers is not something we come across often—it’s quite rare,” Jauregui said.
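The defense against that fallback is to exclude null suites from the client’s cipher list explicitly. A short sketch using Python’s `ssl` module and OpenSSL cipher-list syntax (illustrative; WhatsApp’s actual TLS stack is not specified in the report):

```python
import ssl

# Rule out the null-cipher fallback described above: with eNULL (no
# encryption) and aNULL (no authentication) excluded, a negotiation that
# can't agree on a real cipher suite fails instead of sending plaintext.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("HIGH:!aNULL:!eNULL")  # OpenSSL cipher-list syntax

# No suite offered by this context transmits application data unencrypted.
assert all("NULL" not in suite["name"] for suite in ctx.get_ciphers())
```

With a list like this, the failure mode Jauregui describes, a malicious intercept forcing a cleartext fallback, becomes a refused connection instead.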
Mobile app security has lagged behind the security of desktop and Web apps in many respects, as developers have moved to the new platforms and run into many of the same security issues that they encountered years before on the Web. This isn’t the first time that researchers have discovered security problems with WhatsApp. Several years ago it was reported that the app sent data in plaintext, and other researchers found that they could use an API to hijack any user’s account.
Fixing the certificate pinning issue can be done in a variety of ways, and Jauregui said it all depends on what the developers want to do.
“Level of effort can vary depending on how developers choose to implement certificate pinning. Pinning the certificate itself is the simpler way to do it, but it requires more maintenance over time because developers will have to make changes to the application whenever the cert changes. Another way to do it is by pinning the public key, which can be more difficult. Choosing the best way to go often depends on the frequency in which the certificate itself may change,” he said.
Developers with the popular dating application Tinder have fixed a vulnerability that, up until last year, could’ve allowed users to track other users, thanks to a hole in the app’s API and some old-fashioned trigonometry.
Max Veytsman, a Toronto-based researcher with Include Security, disclosed the vulnerability Wednesday on the firm’s blog, claiming that before it was fixed he could find the exact location of any Tinder user with a fairly high level of accuracy, to within 100 feet.
Tinder, available on iOS and Android, has been massively popular over the last year. It routinely appears in Apple’s list of most downloaded apps and apparently has been all the rage at this winter’s Olympic games in Sochi, Russia, with reports that many athletes are using it to kill downtime.
The app is a location-aware dating platform that allows users to swipe through images of nearby strangers. Users can either “like” or “nope” images. If two users “like” each other, they can message each other. Location is critical for the app to function — beneath each image Tinder tells users how many miles away they are from potential matches.
Include Security’s vulnerability is tangentially related to a problem in the app from last year wherein anyone, given a little work, could mine the exact latitude and longitude of users.
That hole surfaced in July and according to Veytsman, at the time “anyone with rudimentary programming skills could query the Tinder API directly and pull down the coordinates of any user.”
While Tinder fixed that vulnerability last year, the way they fixed it left the door open for the vulnerability that Veytsman would go on to find and report to the company in October.
Veytsman found the vulnerability by doing something he usually does in his spare time: analyzing popular apps to see what he can find. He was able to proxy iPhone requests to analyze the app’s API, and while he didn’t find any exact GPS coordinates – Tinder removed those – he did find some useful information.
It turns out that before the fix, Tinder was being very precise when telling its servers how many miles apart users were from one another. One part of the app’s API, the “Distance_mi” function, tells the app almost exactly (to 15 decimal places) how many miles a user is from another user. Veytsman was able to take this data and triangulate it to determine a user’s most recent locations.
Veytsman simply created a profile on the app, used the API to tell it he was at a random location and from there, was able to query the distance to any user.
“When I know the city my target lives in, I create three fake accounts on Tinder. I then tell the Tinder API that I am at three locations around where I guess my target is.”
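The geometry behind those three fake accounts is classic trilateration: each reported distance defines a circle around a probe position, and three circles intersect at a single point. A flat-plane sketch (my illustration of the math, not Veytsman’s TinderFinder code; real use would convert latitude/longitude to planar coordinates first):

```python
# Trilateration: given three probe positions and the distance reported from
# each, solve for the one point consistent with all three circles.
def trilaterate(p1, p2, p3, d1, d2, d3):
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    # Subtracting the circle equations pairwise cancels the x^2/y^2 terms,
    # leaving two linear equations in (x, y).
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

With distances accurate to 15 decimal places, the intersection point is essentially exact, which is why Tinder’s fix was to round the reported distance rather than remove it.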
To make it even easier, Veytsman created a web app to exploit the vulnerability. For privacy’s sake, he never released the app, dubbed TinderFinder, but claims in the blog he could find users by either sniffing a user’s phone traffic or inputting their user ID directly.
While Tinder’s CEO Sean Rad said in a statement yesterday that the company fixed the problem “shortly after being contacted” by Include Security, the exact timeline behind the fix remains a little hazy.
Veytsman says the group never got a response from the company aside from a quick message acknowledging the issue and asking for more time to implement a fix.
Rad claims Tinder didn’t respond to further inquiries as it does not typically share specific “enhancements taken” and that “users’ privacy and security continue to be our highest priority.”
Veytsman assumed the app was fixed at the beginning of this year after Include Security researchers looked at the app’s server-side traffic to see if they could find any “high precision data” leakage and discovered that none was being returned, suggesting the problem was fixed.
Since the researchers never got an official response from Tinder that it had been patched and since the issue was no longer “reproducible,” the group decided it was the right time to post their findings.
Attackers breached a University of Maryland database containing more than 300,000 student, faculty, staff, and other affiliated records on Tuesday, according to an apology issued by the university’s president, Wallace D. Loh.
While it is not clear exactly how many individuals are affected by the breach, the compromised database contained the records of every person issued a university identification at both the College Park and Shady Grove campuses since 1998. In total, the database stored 309,079 records.
The breach exposed Social Security numbers, names, dates of birth, and university identification numbers. As is a common motif among data breach notifications, this one also announced some information that was not exposed by the breach. Namely, no phone numbers, addresses, or payment, academic, or health information was compromised.
The breach is currently under investigation, and the school – which is claiming it “was the victim of a sophisticated computer security attack” – is not commenting on the technical details of the intrusion.
“Computer forensic investigators are examining the breached files and logs to determine how our sophisticated, multilayered security defenses were bypassed,” Loh said in a statement. “Further, we are initiating steps to ensure there is no repeat of this breach.”
The university is warning students and others who may have been affected by the breach to use caution when exchanging personal information online. The university says it will not contact anyone via email and ask them to provide personal information regarding the incident. Should anyone be contacted over the phone, they are advised to ask for a call-back number so they can verify the identity of the person attempting to contact them.
“Universities are a focus in today’s global assaults on IT systems. We recently doubled the number of our IT security engineers and analysts. We also doubled our investment in top-end security tools. Obviously, we need to do more and better, and we will.”
The University is offering one year of free credit monitoring services to anyone affected by the breach.
Calls to the University were not returned by the time of publication.
Google Chrome 33 is out, and the new version of the browser includes fixes for 28 security vulnerabilities, including a number of high-severity bugs. The company paid out more than $13,000 in rewards to researchers who reported vulnerabilities that were fixed in this release.
One of the high-priority vulnerabilities Google patched in Chrome 33 is an issue with the Windows sandbox. The company also patched a use-after-free vulnerability in Chrome’s layout engine. Here’s the full list of the bugs discovered by external security researchers fixed in Chrome 33:
[$2000] High CVE-2013-6652: Issue with relative paths in Windows sandbox named pipe policy. Credit to tyranid.
[$1000] High CVE-2013-6653: Use-after-free related to web contents. Credit to Khalil Zhani.
[$3000] High CVE-2013-6654: Bad cast in SVG. Credit to TheShow3511.
[$3000] High CVE-2013-6655: Use-after-free in layout. Credit to cloudfuzzer.
[$500] High CVE-2013-6656: Information leak in XSS auditor. Credit to NeexEmil.
[$1000] Medium CVE-2013-6657: Information leak in XSS auditor. Credit to NeexEmil.
[$2000] Medium CVE-2013-6658: Use-after-free in layout. Credit to cloudfuzzer.
[$1000] Medium CVE-2013-6659: Issue with certificates validation in TLS handshake. Credit to Antoine Delignat-Lavaud and Karthikeyan Bhargavan from Prosecco, Inria Paris.
Low CVE-2013-6660: Information leak in drag and drop. Credit to bishopjeffreys.
In addition to these vulnerabilities, Google also fixed more than a dozen bugs that were discovered by the company’s internal security team. That group of bugs includes 15 high-severity flaws and two medium-level vulnerabilities.
Adobe rushed out an unscheduled Flash Player update today to counter exploits of a zero-day vulnerability in the software.
A number of national security, foreign policy and public policy websites are hosting exploits that redirect to espionage malware, including the Peter G. Peterson Institute for International Economics, the American Research Center in Egypt and the Smith Richardson Foundation.
Those three nonprofit sites, researchers at FireEye said, are redirecting visitors to an exploit server hosting variants of the PlugX remote access Trojan. FireEye calls the campaign Operation GreedyWonk.
“This threat actor clearly seeks out and compromises websites of organizations related to international security policy, defense topics, and other non-profit sociocultural issues,” FireEye wrote in an advisory today. “The actor either maintains persistence on these sites for extended periods of time or is able to re-compromise them periodically.”
The hackers behind this campaign have resources that include access to Flash and Java zero-day exploits, FireEye said. They are targeting visitors who use these websites as a resource and those visitors are likely government or embassy employees who are at risk for data loss.
Adobe’s update today is for Flash Player 12.0.0.44 and earlier for Windows and Macintosh, and Flash 11.2.202.336 and earlier for Linux. CVE-2014-0502 has been assigned to this vulnerability. FireEye said that the exploit targets Windows XP users, as well as Windows 7 users running an unsupported version of Java (1.6) or out-of-date versions of Microsoft Office 2007 or 2010. The vulnerability enables someone to remotely overwrite the vftable pointer of a Flash object to redirect code execution.
The exploit is using the Adobe Flash vulnerability to bypass ASLR and DEP protections native to Windows. It does so by building or using hard-coded return-oriented programming chains in XP and Windows 7 respectively. Upgrading to the latest versions of Java (1.7) or Office will mitigate the threat, but not patch the underlying vulnerability, FireEye said.
“By breaking the exploit’s ASLR-bypass measures, they do prevent the current in-the-wild exploit from functioning,” FireEye said.
The hackers are installing the PlugX/Kaba RAT on infected computers; the sample FireEye reported was found on Feb. 13 and compiled the day before, an indication it was purpose-built for these targets. The RAT calls out to three command and control domains, one of which, wmi.ns01[.]us, has been used in other campaigns involving PlugX and the Poison Ivy RAT. Some of the older Poison Ivy samples were found in attacks involving Flash exploits and similar defense and policy websites, including the Center for Defense Information and another using a Java exploit against the Center for European Policy Studies.
Today’s out of band patch is the second one for Flash this month.
Microsoft last night released a Fix-It tool as a temporary mitigation for a zero-day vulnerability in Internet Explorer 10 being exploited by two hacker groups against the Veterans of Foreign Wars in the U.S. as well as a French aerospace manufacturer.
IE 9 also contains the same use-after-free vulnerability enabling remote code execution, but it is not being exploited, Microsoft said. Microsoft has issued Fix-It tools for a number of zero-day vulnerabilities exploited in the wild in lieu of rushing out an out-of-band patch. The company’s next scheduled Patch Tuesday security updates release is March 11, which is likely the earliest an IE update would be released.
Microsoft has been patching its maligned browser almost monthly for more than a year, including a cumulative update on Feb. 11 that patched 24 vulnerabilities, including one that was publicly disclosed.
Researchers at FireEye reported the Veterans of Foreign Wars attack last week and attributed Operation SnowMan to the same groups behind DeputyDog and Ephemeral Hydra, both of which exploited IE zero-days in watering hole attacks to distribute remote access Trojans in order to spy on targets in government, military, manufacturing and other high value industries.
FireEye found an iframe on VFW.org that used a malicious Flash object to trigger the vulnerability in IE 10. Once on a compromised machine, the Flash object downloads the RAT from a command server and executes it. As in the previous attacks, a variant of Gh0stRAT was used in the SnowMan attacks and connected to some of the same IP addresses. The exploit used in the SnowMan attacks, FireEye said, can bypass memory protection features such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) built into Windows.
Yesterday, researchers at Seculert reported that a second group of attackers was using the same vulnerability in the Microsoft browser to impersonate the French aerospace firm and compromise visitors to its website and steal credentials. Reuters reported yesterday that the manufacturer was Snecma, an engine manufacturer. The news agency cited a source who said the malware used against Snecma targeted domains belonging to the company.
Microsoft confirmed FireEye’s finding in a technical description of the vulnerability yesterday:
Seculert CTO Aviv Raff said the second group is likely not affiliated with the Operation SnowMan gang. While exploiting the same vulnerability, the group targeting the French manufacturer used different malware. It drops a backdoor and two executables that steal information from browsing sessions; that data is sent to a command and control server, which is hosted in the U.S. Raff said the malware is signed with a valid certificate belonging to Micro Digital Inc.
The malware changes the host files on infected machines and adds several secure domains for French aerospace companies. While some pharming campaigns have gone this route, Raff said this campaign has a different goal.
“The domains that were added to the hosts file by the malware provide remote access to the employees, partners, and 3rd party vendors of a specific multinational aircraft and rocket engine manufacturer,” Raff said. “The IPs added belong to the real remote access web servers and by adding the records to the hosts file the attackers ensured that there would be no DNS connectivity issues. Whenever the infected machines connect to the remote assets, the attackers are able to steal the sensitive credentials. This is the first time we have seen a malware change a hosts file for a purpose other than fraud perpetuated by pharming or for disabling access to specific websites.”
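The hosts-file trick Raff describes is straightforward to illustrate. The sketch below uses invented domain names and addresses, not those from the actual campaign: the malware appends entries that pin a target’s remote-access portals to fixed addresses so that no DNS lookup is ever made, guaranteeing connectivity while the info-stealing component harvests credentials. The same parsing logic doubles as a simple defender audit for non-loopback overrides.

```python
# Hypothetical hosts-file contents (domains and IPs are invented for
# illustration; the real campaign pinned a manufacturer's remote-access
# portals to their genuine server addresses).
HOSTS_ENTRIES = """\
127.0.0.1    localhost
203.0.113.10 remote.example-aerospace.com
203.0.113.11 vpn.example-aerospace.com
"""

def parse_hosts(text):
    """Map each hostname in a hosts file to the address it resolves to."""
    mapping = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and not line.lstrip().startswith("#"):
            for name in parts[1:]:
                mapping[name] = parts[0]
    return mapping

hosts = parse_hosts(HOSTS_ENTRIES)

# A defender's audit: any non-loopback entry for an external domain is a
# static DNS override and deserves scrutiny.
overrides = {name: ip for name, ip in hosts.items()
             if ip != "127.0.0.1" and name != "localhost"}
print(overrides)
```

Because the operating system consults the hosts file before querying DNS, entries like these silently win every name resolution for the listed domains.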
The Internet Bug Bounty program, a cooperative effort among security experts and vendors, paid out its first $10,000 bounty this week for a serious Flash vulnerability. The flaw, which Adobe fixed in December, was a serious one that has been used in targeted attacks.
Started in November, the Internet Bug Bounty is a system set up by security researchers and backed by Microsoft and Facebook to reward researchers who disclose bugs responsibly. Both Microsoft and Facebook have their own bug bounty programs, as do many other vendors, but they cover each company’s specific products. The Internet Bug Bounty program is meant to cover some core Internet technologies such as DNS and SSL, along with widely deployed software such as Flash, Google Chrome and Internet Explorer.
The group has been paying out some smaller bounties, but this is the first five-figure payout from the group, and it came for a serious vulnerability. Last week, Citizen Lab researchers reported that the Adobe Flash vulnerability was being used in targeted attacks against journalists. Interestingly, David Rude, the iDefense Labs researcher who received the bounty, didn’t report the bug directly to the IBB, but to Adobe. In fact, he didn’t even discover it himself; he saw attackers using exploits against it. Still, the IBB paid Rude the bounty as a reward for his work.
“The IBB culture is to err on the side of paying. Note that David did not discover the vulnerability himself; he discovered someone else using it. IBB culture is to look mainly at whether a given discovery or piece of research helped make us all safer. Our aim is to motivate and incentivize any high-impact work that leads to a safer internet for all,” Google security engineer Chris Evans, an adviser for the IBB, wrote in a blog post on the bounty payment.
“IBB does not want or need details of unfixed vulnerabilities — that would violate strict need-to-know handling. Once a public advisory and fix is issued, researchers or their friends may file IBB bugs to nominate their bugs for reward. Or, for important categories such as Flash or Windows / Linux kernel bugs, panel members keep an eye out for high impact disclosures and nominate on the researchers’ behalf. Because we care.”
The idea of paying researchers for bugs that they reported to other organizations–or didn’t discover directly–is a rare one in the world of bug bounties. Most companies that have such programs run them in order to get researchers to find vulnerabilities in their software, not in other companies’ software. But because the IBB is not tied to any one vendor, it has the ability to make decisions to pay researchers for work that, in Evans’s words, makes the Internet safer for everyone.
A new report from the SANS Institute warns that the push to digitize all health care records along with the emergence of HealthCare.gov and the general proliferation of electronic protected health information (ePHI) online will only exacerbate the security problems faced by those that store sensitive health care data. In other words, the report says, health care critical information assets are poorly protected and already compromised in many cases.
The “Health Care Cyberthreat Report” suggests that a compliance nightmare looms on the horizon and – more concerning yet – that the health care industry is now facing more exposure to attacks than ever before. The findings are particularly troublesome when you take into consideration that the health care industry has had a sordid history with IT security.
Sensitive health care information was never really all that secure to begin with. Health care data breaches were commonplace long before President Barack Obama signed the Patient Protection and Affordable Care Act into law. In fact, the SANS Institute opens its report with the stark and startling statistic that 94 percent of all healthcare organizations admit they have been the victim of a data breach at some point. That is an incredibly high number and, like all data breach statistics, it fails to account for those companies that have been breached and aren’t fessing up to it or just don’t know about it yet.
The report examined device-based and organizational sources of malicious traffic. Ironically, in terms of devices, most of the malicious traffic either passed through or was transmitted by security devices or applications. More specifically, virtual private networks enabled 33 percent of malicious traffic while 16 percent of malicious packets were sent by firewalls. Routers and enterprise network controllers together accounted for nine percent of such traffic. Other device types vulnerable to compromise included radiology imaging software, video conferencing systems, mail and VOIP servers, digital video systems, call contact software, and networked printers and fax machines.
“Today, almost every network attached device is shipped from its vendor in an insecure configuration with defaults that can be discovered easily through an Internet search,” said Barbara Filkins, a senior SANS analyst and healthcare specialist.
The report went on to note that while network administrators reliably change easily guessed default credentials on routers and firewalls, they often overlook other network-attached devices such as surveillance cameras, printers, and fax machines. As is so often the case, weak credentials and poorly configured security controls were among the leading causes of security incidents. Attackers can easily daisy-chain access from one poorly secured medical endpoint to more sensitive network devices.
The volume of Internet protocols examined within this targeted sample, the report claims, could be extrapolated to suggest that millions of health care organizations around the globe may already be exchanging malicious information.
“And theoretically,” the report says, “the effects of an ePHI compromise could potentially touch almost every person in the United States if the goal set by President Bush in 2004 that every American have an electronic health record by 2014 comes anywhere close to reality.”
Compliance, the report says, does not equal security. Existing best practices are not keeping up with attack techniques. Not only is patient data at risk, but so too is intellectual property, medical payment and billing information, and systems integrity. The findings showed that once a breach occurred, attackers regularly launched phishing and distributed denial of service attacks.
“This level of compromise and control could easily lead to a wide range of criminal activities that are currently not being detected,” said Filkins. “For example, hackers can engage in widespread theft of patient information that includes everything from medical conditions to social security numbers to home addresses, and they can even manipulate medical devices used to administer critical care.”
The costs incurred after breaches – said to include lawsuits, free credit monitoring services, stock fallout, and other expenses – are increasing as well. One Ponemon study from 2013 found that each exposed record could end up costing an organization some $233.
The report warns that many healthcare-related organizations – including one not named by SANS but described as a top three example of vulnerable medical organizations – believe their existing security controls, such as their firewall, are enough to prevent compromise. In other words, organizations that have already been breached believe that they cannot be compromised because of their existing security solutions.
The report examined all sorts of players in the healthcare industry, from small providers to research and teaching hospitals to clearinghouses, health plans, and pharmaceutical companies. Among these, the lion’s share of malicious traffic originated from health care providers (72 percent). Health care business associates – essentially businesses providing services that support that industry – followed in a distant second, accounting for 9.9 percent of malicious packets. Health plans (6.1 percent), pharmaceutical companies (2.9 percent), and health care clearing houses (0.5 percent) closed out the list. Other related health care entities accounted for the remaining 8.5 percent of malicious traffic.
The report ultimately says that a completely new approach to security will be needed to address these problems. Considering the explosion of newly connected devices, organizations must know what is on their network and find ways to secure these devices. Part of this assessment necessarily includes replacing older, vulnerable software and networked equipment. The report also urges organizations to think like attackers. A fax machine may seem benign, but an attacker could potentially monitor it to siphon off patient prescription information. Surveillance systems can be remotely monitored to determine ways of physically accessing areas with valuable data. Furthermore, vulnerability assessments and software patch management must be an ongoing process.
The SANS report was based on data collected by the Norse threat intelligence network between September 2012 and October 2013. Norse is a provider of security and anti-fraud products with a focus on the health care industry. Its threat intelligence infrastructure consisted of a global network of sensors and honeypots that processed and analyzed hundreds of terabytes of data daily during the sample period. According to the report, the collected data included 49,917 unique malicious events, 723 unique malicious source IP addresses, and 375 U.S.-based compromised health care-related organizations.
Hosted two-factor authentication firm Duo Security acknowledged late last week that it discovered a vulnerability in its WordPress plugin (duo_wordpress plugin) that could allow a user to bypass two-factor authentication (2FA) on a multisite network.
Jon Oberheide, one of Duo’s founders, stressed last week that the problem only exists for users who have multisite WordPress setups with 2FA enabled on one of their sites. Users who deploy the plugin universally (and enable it universally) on their sites are not at risk.
If a user has 2FA set up on a site, they’ll be asked for primary credentials (a username and password) and the second factor information. But if there’s another site on the same multisite network, a user from the first site can go to the second site and only be asked for primary credentials. If they have those credentials, they’ll be authenticated, and then redirected back to their first site without being asked for 2FA. It’s bypassed entirely.
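The logic flaw is easiest to see as a toy model. The sketch below is hypothetical Python, not the duo_wordpress plugin’s actual PHP code; the site names and session structure are invented purely to show why per-site 2FA enablement fails when the resulting session is honored network-wide.

```python
# Hypothetical model of a WordPress multisite network where the 2FA plugin
# is enabled on only one site (names invented for illustration).
NETWORK_SITES = {
    "blog-a": {"2fa_enabled": True},   # 2FA plugin active here
    "blog-b": {"2fa_enabled": False},  # plugin not activated on this site
}

def log_in(site, password_ok, second_factor_ok):
    """Return a network-wide session if this site's login checks pass."""
    if not password_ok:
        return None
    # The flaw: the second-factor check only runs on sites where the plugin
    # is active, yet the session it issues is valid across the whole network.
    if NETWORK_SITES[site]["2fa_enabled"] and not second_factor_ok:
        return None
    return {"session": "network-wide", "issued_by": site}

# With stolen primary credentials alone, the attacker is blocked on blog-a...
assert log_in("blog-a", password_ok=True, second_factor_ok=False) is None
# ...but can log in via blog-b, and that session is then accepted on blog-a.
assert log_in("blog-b", password_ok=True, second_factor_ok=False) is not None
```

This is why enabling the plugin globally (rather than per site) closes the hole: the second-factor check then runs on every path that can mint a session.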
Oberheide described the vulnerability’s impact in bullet points in a blog entry last week in order to clarify some misinformation he said was being spread.
- Only WordPress “Multisite” deployments that have chosen to deploy the plugin on an individual site basis are affected.
- Normal WordPress deployments or Multisite deployments with the plugin enabled globally are NOT affected.
- The user must still present correct primary authentication (eg. username and password); only the second factor is bypassed.
Duo discovered the vulnerability and confirmed it internally earlier this month before issuing the advisory for it last week. At this time it affects version 1.8.1 and earlier of the product.
Oberheide writes that Duo is putting together a permanent fix and is working with WordPress but suggests a “core modification” may have to be made to the way the platform handles plugins to fix the issue.
The problem doesn’t solely exist on Duo’s plugins but is also present on those belonging to other two-factor vendors as well. Oberheide and company said they’ve informed vendors who are affected and several of them, like Duo, are working on fixes.
In the meantime Duo is encouraging users who have duo_wordpress deployed on multisite setups to enable the plugin globally, and then disable it for specific user roles until a fix is issued. Users who run a different WordPress two-factor authentication plugin may want to look into seeing if its vendor is planning a patch.
Android devices prior to version 4.2.1 of the operating system—70 percent of the phones and tablets in circulation—have been vulnerable to a serious and simple remote code execution vulnerability in the Android browser for more than 93 weeks.
Metasploit recently added an exploit module that targets the vulnerability, which was patched in 4.2.2, released one year ago. However, with carriers and device makers slow to push updates and security patches, close to three-quarters of the Android user base is at risk of attack. For some perspective, Android Central reports that KitKat, the latest version of Android, has yet to hit 2 percent adoption.
“I did a quick survey of the phones available today on the no-contract rack at a couple big-box stores, and every one that I saw were vulnerable out of the box,” said Rapid7 senior manager of engineering Tod Beardsley. “And yes, that’s here in the U.S., not some far-away place like Moscow, Russia.”
The exploit module, built by contributors Joe Vennix and Joshua Drake, could enable access to the device camera, location data, information stored on an SD card and even the user’s address book. Drake said he was recently able to get code execution on Google Glass using the exploit.
Rapid7 said an attacker would need to be man-in-the-middle on a device in order to exploit it, something its new exploit module simplifies. The company demonstrates the exploit in a video; in that demonstration, the exploit is triggered by a malicious QR code the victim scans with an Android smartphone, opening a command shell for the attacker.
The best mitigation is to update Android to 4.2.2 or higher, but that isn’t always feasible for users. Device manufacturers and carriers control when updates are rolled out, despite the fact that Google is generally prompt with patches and updates.
The carriers and manufacturers have been under fire from privacy and security experts and even the U.S. Federal Trade Commission. Last April, the American Civil Liberties Union asked the FTC to investigate four major carriers, accusing them of deceptive business practices and knowingly selling consumers defective phones that are shy on security updates and patches. The ACLU requested that the FTC force carriers to warn customers about unpatched vulnerabilities, allow customers with vulnerable phones to escape their contracts without early termination penalties, and let customers exchange their phones at no cost for models that receive regular security updates, or return them for a full refund.
Last February, the FTC reached a damning settlement with device maker HTC America. The FTC forced HTC to enact expensive security enhancements that included regular security patches for Android devices, establish a security program that focuses on developer security, and submit to security assessments.
Cisco’s UCS Director infrastructure management product contains a set of default credentials that any remote attacker can exploit to take complete control of any vulnerable machine. The flaw is in UCS Director releases prior to the 4.0.0.3 hotfix.
The Cisco UCS Director software is designed to allow administrators to manage a variety of storage, networking, virtualization and other equipment. The company said that its internal security team discovered the vulnerability during testing of the product and isn’t aware of any public exploitation of the bug.
“The vulnerability is due to a default root user account created during installation. An attacker could exploit this vulnerability by accessing the server command-line interface (CLI) remotely using the default account credentials. An exploit could allow the attacker to log in with the default credentials, which provide full administrative rights to the system,” the Cisco advisory says.
The company has released a patch for the bug, pushed out as the 4.0.0.3 HOTFIX release.
Cisco also released patches for vulnerabilities in a variety of other products, including the Cisco Unified SIP Phone 3905, Cisco IPS software and the Cisco Firewall Services Module software. The flaw in the SIP Phone 3905 is a vulnerability that allows a remote unauthenticated attacker to get root access to the phone. The issue is the result of an undocumented test interface in the TCP service on the phone, the kind of vulnerability that attackers love to get their hands on.
The flaws in the IPS software are all denial-of-service vulnerabilities and affect a variety of different Cisco products.
“The Cisco IPS Analysis Engine Denial of Service Vulnerability and the Cisco IPS Jumbo Frame Denial of Service Vulnerability could allow an unauthenticated, remote attacker to cause the Analysis Engine process to become unresponsive or crash. When this occurs, the Cisco IPS will stop inspecting traffic,” the advisory says.
“The Cisco IPS Control-Plane MainApp Denial of Service Vulnerability could allow an unauthenticated, remote attacker to cause the MainApp process to become unresponsive and prevent it from executing several tasks including alert notification, event store management, and sensor authentication. The Cisco IPS web server will also be unavailable while the MainApp process is unresponsive, and other processes such as the Analysis Engine process may not work properly.”
The Cisco Firewall Services Module software has a vulnerability that allows a remote, unauthenticated attacker to cause the system to crash and reload.
“The vulnerability is due to a race condition when releasing the memory allocated by the cut-through proxy function. An attacker could exploit this vulnerability by sending traffic to match the condition that triggers cut-through proxy authentication,” the advisory says.
Windows Error Reporting messages, also known as Dr. Watson reports, are Windows crash reports sent by default, unencrypted, to Microsoft, which uses them to fix bugs. The reports are rich with system data that Microsoft also uses to enhance user interaction with its products. Because they are sent in clear text back to Redmond, however, they are also at risk of interception by hackers, who can use the system data to blueprint potential vulnerabilities and ultimately exploit them.
While it may sound far-fetched, a German publication reported in late December that the U.S. National Security Agency was doing just that—using its XKeyscore tool to collect crash reports and target exploits accordingly.
The only mitigation is for Windows administrators to manually opt out of sending crash reports back to Microsoft, something that isn’t happening on a large scale; Microsoft receives billions of these reports from 80 percent of its installed user base.
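For administrators who want that opt-out, Windows exposes a documented machine-wide switch via Group Policy or, equivalently, the registry. A minimal sketch, assuming an elevated command prompt (the `Disabled` value under the Windows Error Reporting key turns off report submission entirely):

```
rem Disable Windows Error Reporting machine-wide (run as administrator)
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting" /v Disabled /t REG_DWORD /d 1 /f
```

Note that disabling the reports trades away their diagnostic value; Websense’s argument, below, is that organizations may instead want to collect and analyze them internally.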
Security company Websense, in December, urged administrators to be proactive about these reports and use them as a first step in detecting advanced attacks against an organization since exploits generally cause applications to behave abnormally. The company released a report today that demonstrates exactly how to do that and said it was able to find advanced attacks in progress against a major cellular network operator and a Turkish government website. It also threw back the covers on another campaign targeting point-of-sale systems with a variant of the Zeus Trojan built to infect POS devices and backends.
The key is to differentiate between crashes that are indicative of exploits and those that are merely crashes due to a programming bug. For example, crashes that happen outside of programmable memory space could be an indication of an active exploit that enables remote code execution.
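That heuristic can be shown with a toy classifier. Everything below (the module map, the crash records, and the addresses) is hypothetical and only illustrates the idea of flagging crashes whose faulting address falls outside every mapped executable image; it is not Websense’s actual fingerprinting code.

```python
# Hypothetical map of loaded executable images: (base address, size).
MODULE_RANGES = [
    (0x00400000, 0x0080000),   # e.g. iexplore.exe
    (0x77000000, 0x0100000),   # e.g. ntdll.dll
]

def in_mapped_module(addr):
    """True if the address falls inside any mapped executable image."""
    return any(base <= addr < base + size for base, size in MODULE_RANGES)

def suspicious(crash):
    """A crash at an address outside every mapped image is a red flag:
    code may have been executing from heap or stack, as shellcode does."""
    return not in_mapped_module(crash["fault_address"])

crashes = [
    {"app": "iexplore.exe", "fault_address": 0x00452A10},  # ordinary bug
    {"app": "iexplore.exe", "fault_address": 0x0C0C0C0C},  # heap-spray range
]
flags = [suspicious(c) for c in crashes]
print(flags)  # the second crash, outside any module, gets flagged
```

A real system would also weigh exception type, crash frequency, and known-bad addresses, but the core signal is the same: legitimate bugs usually fault inside program code, failed exploits often don’t.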
“It goes from a breadcrumb to something interesting,” said Alex Watson, director of security research at Websense.
Watson said his company collected 16 million Dr. Watson reports during a four-month period, looking for system crashes caused by previously unseen exploits against CVE-2013-3893, a use-after-free vulnerability in Internet Explorer 6-11 that was used in the Deputy Dog watering hole attacks against a number of companies in high-profile industries in Asia. Those failed processes leading to system crashes enabled Websense to fingerprint the damage caused by an exploit attempt.
Of the 16 million reports, five crash reports in four organizations matched the fingerprint Websense built, which included memory locations where IE might crash if it were attacked using a CVE-2013-3893 exploit. As it turned out, the organizations were hit by the HWorm remote access Trojan used in targeted attacks. The RAT beaconed from the affected organizations at the same time the failed exploits happened, Watson said.
“We were able to link the failed exploit attempt to the RAT to get some indicator of common techniques,” Watson said.
Websense said it also collected crash data from point-of-sale applications similar to those compromised in the Target and Neiman Marcus breaches by RAM scraper malware which steals credentials and payment card data from the device before it is encrypted and sent to the payment processor. A majority of the crash reports Websense used were from a clothing retailer in the Eastern United States, it said, which was infected with a variant of Zeus that zeroes in on POS devices and applications. Watson said the malware attempted to connect to command and control servers at the same time the applications crashed.
“Most exploits today force applications to behave in a way they’re not supposed to and they end up executing shell code and things like that,” Watson said. “With Microsoft rolling out advanced stuff like ASLR making it really hard for attackers to successfully execute exploits, there’s a much higher chance they’re going to fail. Once attackers gain a foothold in the network and make it past the perimeter-based security system, there’s a mindset that their content is no longer monitored by IPS systems and you’ll see attackers use the most direct path with exploits toward their target, thinking they’re not going to be monitored. Again, there’s a high chance of crashing applications on the network.”
There are at least two different groups running attacks exploiting the recently published zero day vulnerability in Internet Explorer 10, and researchers say one of the groups used the bug to impersonate a French aerospace manufacturer and compromise victims visiting the spoofed Web page. The attackers also used a special feature of their malware to change portions of the Windows host file to steal credentials when users visit secure sites.
Last week, researchers at FireEye identified a compromised page on the site of the Veterans of Foreign Wars and discovered that it was being used to exploit visitors using the IE zero day. The company said that the attack bore some resemblance to previous operations from a known group that also incorporated zero days. However, researchers at Seculert said that there also appears to have been a second, separate attack by an unaffiliated group of attackers.
“Our analysis reveals that a totally different malware than ZXShell, the culprit as identified by FireEye, was used and has the following capabilities: backdoor (Remote Access Tool), downloader, and information stealer (Figure 2). The malware drops 2 files: MediaCenter.exe – a copy of itself, and MicrosoftSecurityLogin.ocx, which is registered as an ActiveX – used by malware to steal information from browsing sessions. Once installed the malware communicates with a criminal command and control server (C&C). Seculert’s investigation has concluded that the C&C is hosted on the same server as the exploit, located in the United States. Moreover, typical red flags would remain unraised as the malware itself has a valid digital certificate. The certificate belongs to MICRO DIGITAL INC. and is valid since March 21, 2012,” Aviv Raff, CTO of Seculert, wrote in an analysis of the attack.
The attackers are using the malware to change the host files on infected machines and add in several secure domains for French aerospace companies. This kind of behavior has been seen in the past from attackers running so-called pharming campaigns, in which compromised machines are used to send traffic to phishing sites. This attack group is using the host-file modification for a different reason, though.
“But what is disturbing about this attack is that the same behavior accomplished a completely different goal. The domains that were added to the hosts file by the malware provide remote access to the employees, partners, and 3rd party vendors of a specific multinational aircraft and rocket engine manufacturer. The IPs added belong to the real remote access web servers and by adding the records to the hosts file the attackers ensured that there would be no DNS connectivity issues. Whenever the infected machines connect to the remote assets, the attackers are able to steal the sensitive credentials. This is the first time we have seen a malware change a hosts file for a purpose other than fraud perpetuated by pharming or for disabling access to specific websites,” Raff said.
Given the differences in the attack methodology and the malware used, as well as the C&C infrastructure, Raff said the logical conclusion is that there are two different groups using the IE 10 0-day.
“The main differences in this attack lead us to conclude that the group behind the attack is different than previously hypothesized,” Raff said.
Image from Flickr photos of Jeremy Seitz.