Dennis Fisher and Mike Mimoso talk about the major stories from the last couple of weeks, including the changes to the Microsoft bug bounty program, the new Internet bug bounty, the Apple transparency report and a new paper on a weakness in Bitcoin.

http://threatpost.com/files/2013/11/digital_underground_133_.mp3
The Wysopal name has appeared on vulnerability advisories for more than 20 years now, and it doesn’t look like that is going to end anytime soon. But the name on those advisories in the future may be Renee rather than Chris Wysopal.
Chris, one of the founding members of the L0pht hacking collective and now the CTO and CISO at Veracode, helped shape the way that vulnerabilities are reported to vendors and disclosed to users, and has been a part of some of the industry-wide efforts to define disclosure guidelines and vendor responses. While at @stake, Wysopal and the rest of the research team were at the forefront of the movement that sought to pressure vendors into working closely and honestly with security researchers who disclosed bugs to them.
Now, his daughter Renee is following in his footsteps. During a summer internship at Veracode, Renee, a sophomore at Trinity College in Hartford, Conn., took part in the company’s annual hackathon, a days-long event in which all employees are encouraged to participate and work on a hacking project. Renee, who was working in the human resources department, decided to work with her dad on a project to find a vulnerability that would qualify for Facebook’s bug bounty.
“I’d seen on Twitter that Facebook would pay a bounty, so I immediately thought of doing something with my dad and he said we should do it together since I’m in college and Facebook is very prevalent in my age group and my dad is a hacker, so we thought it would be fun to bring it together,” she said.
Because Renee was a security neophyte, they started at the beginning. Chris began by showing her what he would do to tackle a Web app like Facebook.
“I started by showing her how to use a Web proxy and view source and then modify the different parameters outside of the Web interface so you could attack the Web app,” Chris said. “So she went off and started thinking about where there could be a bug and she gravitated to one of the hairier parts of Facebook, which is the privacy and permission model.”
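The kind of out-of-band parameter modification Chris describes can be illustrated without a proxy at all, just by rebuilding a URL with one query parameter changed. This sketch is purely illustrative; the URL and parameter names are hypothetical, not Facebook's:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def tamper_param(url: str, name: str, value: str) -> str:
    """Rebuild a URL with one query parameter replaced -- the same kind of
    modification a Web proxy lets you make outside the Web interface."""
    parts = urlparse(url)
    params = parse_qs(parts.query)
    params[name] = [value]
    return urlunparse(parts._replace(query=urlencode(params, doseq=True)))

# e.g. change an identifier the page itself never exposes for editing
print(tamper_param("https://example.com/profile?user_id=1001&tab=info",
                   "user_id", "1002"))
# → https://example.com/profile?user_id=1002&tab=info
```

An intercepting proxy such as Burp does the same thing interactively, catching the request before it leaves the browser so any field, not just the URL, can be altered.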
So Renee began tinkering with the Facebook app and within a few days she started to focus on the feature that enables users to block other users from their pages. It’s a much-used privacy feature and the idea is to allow users to keep people they’re no longer interested in interacting with from being able to post messages on their profiles. Renee happened to notice that there were a bunch of messages on her page from someone she had blocked some time ago.
“I thought there must be some kind of weakness there,” she said, “if Facebook was still allowing her to have her name all over my profile. I think I figured it out the next day. I read all about the Facebook white hat program and they ask you to use test accounts. I was using a test account, so once that worked I was excited but I thought maybe it was just a flaw in the test account.”
She then tested it out on a friend’s account and found that it still worked: after having the friend’s account block her, she was still able to get messages through to it. So, with the help of her dad, Renee wrote up the vulnerability report and submitted it to Facebook’s bug bounty program in August. By the time she found the bug, tested it and submitted the report, the Veracode hackathon was nearly over and it was time for Renee and Chris to deliver their report on what they’d achieved. But they didn’t yet have an answer from Facebook on whether the bug qualified for a bounty.
“It was disappointing because we had to give a report and we hadn’t gotten a reply yet,” Renee said. “All we could say was, we submitted this report. At some point they said, we’ll get back to you later.”
Later turned out to be more than two months, but when the answer came, it was good news: Renee’s find had earned her a $2,500 reward from Facebook.
“It was definitely a surprise. I went back to school and sort of forgot about it and was just focused on school,” she said. “I definitely thought I wasn’t going to find anything, but I figured my dad would.”
Renee, who hasn’t declared a major yet but is leaning toward political science, said she really didn’t have a good idea of what her dad did when she was younger.
“I just always remember him being in his home office on his computer typing weird characters. Even when I was six or seven, I’d ask what he was doing and he’d say he was hacking. I had no idea what that was,” she said. “It was probably only in the last few years that I realized how cool the stuff he did was, after reading his Wikipedia page. He’s pretty modest about it.”
So has her foray into hacking sold her on following her dad’s path?
“For now I think it’s a one and done type thing because it’s such a frustrating process. In some ways I feel like I got lucky to find that,” she said.
Arbor Networks’ Security Engineering and Response Team (ASERT) has discovered a denial-of-service tool specifically designed to target the U.S. government’s healthcare enrollment marketplace, Healthcare.gov.
Healthcare.gov was established by the Affordable Care Act (ACA) in the United States, perhaps better known by the nickname “Obamacare.” The ACA is considered by many to be U.S. President Barack Obama’s crowning achievement, aiming to provide health insurance to millions of uninsured American citizens. The rollout of the website that supports the ACA has been marred by a seemingly endless and humiliating array of technical problems.
As of yet, ASERT has no information to indicate that any of the downtime experienced on Healthcare.gov is the result of a DoS or distributed denial-of-service (DDoS) attack.
However, the DoS tool, written primarily in the Delphi programming language, has emerged, and its sole purpose is to knock the healthcare exchange offline. The tool reportedly performs layer seven (application layer) requests, alternating between the Healthcare.gov homepage and the site’s “contact us” page.
Fortunately for many ACA proponents already embarrassed by the exchange’s problematic beginnings, ASERT claims the tool is unlikely to succeed in its attempts to make Healthcare.gov unreachable because of its non-distributed architecture and other limiting factors.
According to the report, the application is available for download from a number of sources and is being distributed on social media networks as well.
“ASERT has no information on the active use of this software,” Arbor Networks’ Marc Eisenbarth wrote on the ASERT blog. “ASERT has seen site-specific denial of service tools in the past related to topics of social or political interest. This application continues a trend ASERT is seeing with denial of service attacks being used as a means of retaliation against a policy, legal rulings or government actions.”
A patch for the Windows zero-day disclosed this week will not be ready in time for next week’s monthly Patch Tuesday release, Microsoft said today.
The vulnerability in several Windows and Office versions is being exploited in targeted attacks against Windows XP systems running Office 2007. The attacks are limited so far to the Middle East and Asia. Microsoft released a Fix-It tool as a stopgap measure until a patch is released out of band or with the December security updates.
Microsoft, meanwhile, will release eight security bulletins next week, three of them critical, including another Internet Explorer roll-up going all the way back to IE 6. The other two critical bulletins are for flaws in Windows; this is the first set of patches in months that does not affect a Microsoft server component.
As for the zero-day bug, it is in the Graphics Device Interface, or GDI+, found in Office, Windows and Lync. Microsoft clarified some confusion over the conditions in which the vulnerability exists. In Office, for example, 2003 and 2007 are affected regardless of the underlying OS. Office 2010 on Windows XP or Windows Server 2003 is vulnerable; Office 2013 is not.
Vista and Windows Server 2008, meanwhile, contain the vulnerable GDI+ component but are not being attacked, Microsoft said. Other Windows versions are not impacted unless running a version of Office or Lync that is impacted. All supported versions of Lync contain the vulnerable component, but also are not under attack.
The attacks are carried out via infected Word documents sent via email attachments.
“If the attachment is opened or previewed, it attempts to exploit the vulnerability using a malformed graphics (TIFF) image embedded in the document. An attacker who successfully exploited the vulnerability could gain the same user rights as the logged on user,” the Microsoft advisory says.
Attackers can gain the ability to remotely inject code on a compromised machine.
Yesterday, new details emerged from researchers at AlienVault. Once attackers have a presence on the machine, the exploit downloads a RAR file from the attacker’s server containing additional malware, including a keylogger, a backdoor and a component that steals productivity files such as spreadsheets, Word documents, PowerPoint presentations and PDF files.
Researchers at Kaspersky Lab said this is not the first exploit for a TIFF vulnerability and also found additional malicious behavior.
“The new 0day uses malformed TIFF data included in Office documents in order to run a shellcode using heap spray and ROP techniques. We have already researched some shellcodes – they perform common actions (for shellcodes): search API functions, download and launch payload,” said Vyacheslav Zakorzhevsky, head of the vulnerability research group at Kaspersky. “We took a glance at a downloaded payload – backdoors and Trojan-spies.”
Researchers at FireEye, meanwhile, said today that another group is using the exploit to drop the Citadel banking malware onto compromised machines, bringing a criminal element into the equation alongside targeted espionage attacks. FireEye said the Arx group behind these attacks has had the exploit longer than the group using it in targeted attacks. FireEye said 619 targets have been compromised, most in India.
A bounty program begun by a bevy of industry heavyweights, including Microsoft and Facebook, will pay good money to white hats, researchers and even aspiring young hackers who find bugs in any of a dozen technologies central to the vitality and trustworthiness of the Internet.
Dubbed the Internet Bug Bounty, the program’s aim is to make the Internet more secure and incentivize researchers to turn over bugs rather than exploit them. The bounty will pay out cash rewards, with minimum bounties ranging from $1,500 to $5,000, and the sponsor companies will act as intermediaries with the affected vendors.
“The Internet Bug Bounty is accessible to a broad pool of security researchers and has the potential to improve security for a wide variety of technology users,” said Katie Moussouris, senior security strategist lead, Microsoft Trustworthy Computing. “This bounty is a great way to support coordinated disclosure of critical vulnerabilities in shared components of the Internet stack.”
Along with Microsoft and Facebook, researchers from Google, iSEC Partners and Etsy make up the panel.
The bounty’s website, hackerone.com, lays out its disclosure policy online, urging researchers to promptly report vulnerabilities and support further investigation into those reports, while asking vendor or open source response teams to transparently address vulnerabilities, publicly recognize bug-finders and never legally threaten researchers.
Researchers must report vulnerabilities through the HackerOne platform and the details will not be made public for 30 days, giving the affected response team time to remediate. That deadline can be extended to as many as 180 days, the bounty’s rules state, but only in certain unusual cases.
If the bug in question is being actively exploited, the bounty team reserves the right to publicly provide remediation details; no bug details remain private beyond six months.
“If a Response Team is unable or unwilling to issue a patch, the contents of the Bug Report will become publicly available according to the timelines provided,” the rules state. “In no case will the details of a vulnerability be kept non-public beyond 180 days. We believe transparency is in the public’s best interest in these extreme cases.”
The highest payouts are for application sandbox escape vulnerabilities in products such as Chrome, Internet Explorer, Adobe Reader and Flash on Windows 7, Linux and OS X, and core Internet technologies such as DNS, SSL or crypto protocols.
Sandbox bypass exploits have been all the rage this year with hackers taking advantage of weaknesses in Java and other third-party components and browsers to gain control over the underlying system.
“The specifics of these [escape] techniques will differ between implementations but typically manifest as a kernel vulnerability, broker vulnerability or logic error,” the bounty panel said, adding that implementation bugs should be reported to the vendor and are not eligible for a bounty. “Your submission should include why you believe the bug is external to the application itself (e.g., a kernel bug).”
As for Internet bugs, vulnerabilities should be widespread, vendor agnostic, severe and new, the bounty panel said.
“The Internet Bug Bounty panel will award public research into vulnerabilities with the potential for severe security implications to the public,” the panel said. “Simply put: Hack all the things, send us the good stuff, and we’ll do our best to reward you.”
The other technologies in scope are: OpenSSL ($2,500 minimum bounty); Python ($1,500); Ruby ($1,500); PHP ($1,500); Rails ($1,500); Perl ($1,500); Apache httpd ($500); Nginx ($500); Phabricator ($300); and Django (unannounced).
As the TrueCrypt audit chugs along toward a deterministic, clean build of the open-source encryption software and a palatable license, the organizers have brought prominent security and legal experts aboard as a technical advisory team.
The experts will not only provide guidance on the current audit, but could help evolve this project into a framework for examining other open source security tools.
The full list of luminaries is expected to be made public shortly, along with a new website housing details and progress on the TrueCrypt audit. The list already includes noted cryptographers Bruce Schneier and Jean-Philippe Aumasson; security expert Moxie Marlinspike, who has done extensive research on secure protocols, privacy and cryptography; and Marcia Hofmann, a digital rights attorney and fellow at the Electronic Frontier Foundation.
“We really seemed to have sparked something here bigger than what we expected,” said Kenneth White, a security expert who along with Johns Hopkins University professor and cryptographer Matthew Green helped get the TrueCrypt audit off the ground. “The thinking is that maybe what we could do is use the TrueCrypt audit as an example of sort of how we can do an open source evaluation and use it to help refine how we could do this in a more generic way for other projects.
“I certainly get the impression from several people who are involved that we’d hate for this to be a one-off thing. We’d like to take this momentum and maximize it for other projects.”
The TrueCrypt audit, to date, has raised close to $60,000, smashing the team’s initial goal of $25,000 in the first four days of fundraising. Under the bigger umbrella of NSA surveillance and the alleged compromising of popular encryption algorithms by the spy agency, the TrueCrypt audit hopes to answer some potentially troubling questions about the software. Of particular concern is documented odd behavior by the Windows version, which is distributed as precompiled binaries, unlike the Linux and Mac OS X versions. The Windows binaries, therefore, cannot be verified against the published source code, and many have wondered whether a backdoor was inserted at some point.
Coupled with the fact that the identity of its developers isn’t clear, and the burdensome license governing TrueCrypt’s use, the audit is being welcomed by technologists and experts alike. White, for example, said there have been contributions to the audit fund from close to 1,000 people in 30 countries. People from 70 countries have visited the current website, istruecryptauditedyet.com, which has generated two million hits since it went live. White also told Threatpost that the audit has been granted non-profit status by the state of North Carolina and has filed a 501(c)3 application with the IRS for non-profit status.
“We’ve got some fairly ambitious ideas and we’ll start with TrueCrypt for now,” White said, adding that there have been discussions and debate about how much to open up the audit to other entities beyond professional software firms, such as academics or the security community at large. “I suspect there’s going to be some sort of balance because we’ve got so many different people looking at this. Some people are going to only be satisfied if a professional firm looks at it. It’s crazy the range of people who have offered to help either financially or with their services.”
As for the selection of a professional software firm, White said there are a couple of options in play, including separating the cryptanalysis from system engineering as TrueCrypt is audited in order to cover all the bases.
“When you do a whole volume boot from Windows, there’s a lot of stuff going on that’s got very little to do with crypto, just in terms of implementation,” White said. “I think there are 75,000 lines of code, including assembler, C and C++ on three different platforms. There’s an awful lot you have to bring to bear and there just aren’t many people who are wizards at, say, Windows boot process, and OS X and Linux. That’s what we’re trying to figure out. Have people with expertise in all, but it’s probably going to end up being a mix of volunteers, academics and professionals.”
If this ends up being a longer-term initiative, other open source projects as popular as TrueCrypt (28 million downloads) could be in the crosshairs of a similar review.
“Matt and I had talked about it and in some of the private conversations, the suggestion was why not make this a test case for how one could do a proper open security analysis,” White said. “Certainly several people had discussed that before we crystalized the idea.”
This article was updated at 3:30 ET with corrections regarding the audit’s non-profit status.
A new academic paper says there is a fundamental flaw in the Bitcoin protocol that could allow a small cartel of participants to become powerful enough to take over the mining process and gather a disproportionate amount of the value in the system. In the wake of its publication, researchers are debating the potential value of the attack and whether it’s actually practical in the real world. The paper, published this week by researchers at Cornell University, claims that Bitcoin is broken, but critics say there’s a foundational flaw in the paper’s assertions.
Bitcoin is a decentralized cryptocurrency that depends upon the honesty of its users to publish each of their transactions in a central, public ledger. The Cornell paper, written by Ittay Eyal and Emin Gun Sirer, says that if a group controls one third of the Bitcoin mining resources, it can begin to “selfishly” mine blocks and keep them secret from the rest of the miners. Then, when the chain that this group has mined is longer than the public one, it can publish its chain and have it become the authoritative one, since Bitcoin always ignores the shorter block chain when there’s a fork.
“Ittay Eyal and I outline an attack by which a minority group of miners can obtain revenues in excess of their fair share, and grow in number until they reach a majority. When this point is reached, the Bitcoin value-proposition collapses: the currency comes under the control of a single entity; it is no longer decentralized; the controlling entity can determine who participates in mining and which transactions are committed, and can even roll back transactions at will. This snowball scenario does not require an ill-intentioned Bond-style villain to launch; it can take place as the collaborative result of people trying to earn a bit more money for their mining efforts,” the researchers wrote in a blog post on their paper.
“Conventional wisdom has long asserted that Bitcoin is secure against groups of colluding miners as long as the majority of the miners are honest (by honest, we mean that they dutifully obey the protocol as proscribed by pseudonymous Nakamoto). Our work shows that this assertion is wrong. We show that, at the moment, any group of nodes employing our attack will succeed in earning an income above their fair share. We also show a new bound that invalidates the honest majority claim: under the best of circumstances, at least 2/3rds of the participating nodes have to be honest to protect against our attack. But achieving this 2/3 bound is going to be difficult in practice.”
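The quantitative claim above can be checked against the closed-form relative-revenue expression derived in the Eyal-Sirer paper. The sketch below reproduces that formula as commonly cited; alpha is the selfish pool's share of mining power and gamma is the fraction of honest mining power that builds on the pool's block during a tie (the exact form should be verified against the paper itself):

```python
def selfish_revenue(alpha: float, gamma: float) -> float:
    """Relative revenue of a selfish-mining pool per the Eyal-Sirer
    closed-form result. Honest mining would earn exactly alpha, so any
    value above alpha means selfish mining pays off."""
    num = alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha)) - alpha ** 3
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

# At the one-third threshold (gamma = 0) selfish mining exactly breaks even...
print(round(selfish_revenue(1 / 3, 0), 4))  # → 0.3333
# ...and anything above the threshold earns more than its fair share.
print(round(selfish_revenue(0.4, 0), 4))    # → 0.4837
```

This is where the paper's 1/3 bound comes from: below that share the pool's withheld blocks are orphaned too often to pay, above it the pool earns a disproportionate reward and attracts more members.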
The idea of a majority of Bitcoin miners joining together to dominate the system isn’t new, but the Cornell researchers say that a smaller pool of one third of the miners could achieve the same result, and that once they have, there would be a snowball effect with other miners joining this cartel to increase their own piece of the pie. However, other researchers have taken issue with this analysis, saying that it wouldn’t hold together in the real world.
“The most serious flaw, perhaps, is that, contrary to their claims, a coalition of ES-miners [selfish miners] would not be stable, because members of the coalition would have an incentive to cheat on their coalition partners, by using a strategy that I’ll call fair-weather mining,” Ed Felten, a professor of computer science and public affairs at Princeton University and director of the Center for Information Technology Policy, wrote in an analysis of the paper.
“Recall that in the ES attack, a team of ES-miners is racing against a team of ordinary miners, to see who can create a longer block chain. A fair-weather miner pretends to be part of the coalition of ES-miners, but in fact secretly switches teams so that mines for the ES-mining team if that team is ahead in the race, and it mines for the ordinary mining team otherwise. It turns out that every block that the fair-weather miner creates is guaranteed to end up on the winning chain. So the fair-weather miner does better (i.e. gets a better reward) than it could get by playing exclusively on either team.”
However, many Bitcoin miners collaborate in pools or guilds that share resources and rewards. Those groups sometimes require that members submit proof of some of the work they’ve done in order to show that they’re actually participating in the mining and should get a share of the eventual Bitcoin rewards. That integrity check could mitigate the potential emergence of fair-weather miners.
Matthew Green, a cryptographer and research professor at Johns Hopkins University, said that the Cornell paper raises some interesting points but that it’s difficult to know how real-world Bitcoin users would act if such a cartel ever emerged.
“Ed takes aim at this conclusion by pointing out that these coalitions won’t be stable. In real life, self-interested individual miners will hop back and forth between selfish and honest mining to suit their own purposes. That hopping acts as a buffer against further snowballing,” Green said by email.
“I’m very much looking forward to hearing the authors’ response. I think they both have good points, but they’re both working with simplified models of the real world. What I will say is that Bitcoin isn’t so easy to model. For one thing, it’s not really collection of rational nodes working in their own self interest. In fact, Bitcoin today is largely run by people contributing free labor without compensation — storing the block chain, routing transactions, etc. A truly self interested collection of nodes would act very differently. So these results are unlikely to mean much today.”
In the end, Green said, more analysis is needed of the Bitcoin system and the potential vulnerabilities that may lie within it.
“I think it’s fantastic that researchers are finally analyzing Bitcoin as a system. That doesn’t mean we’re likely to see practical attacks anytime soon,” Green said. “The fact that Bitcoin works is pretty amazing. We shouldn’t be surprised if there are a few kinks to work out.”
Although the technology underlying Bitcoin is vital, there are a number of other factors that could contribute to problems with the system.
“As with any other scientific research, the one on the alleged Bitcoin flaw has to be reviewed and analyzed by the community. But we already see that the nature of this ‘vulnerability’ lies in the field of economics rather than computer technology. Even if some group of people (or, more likely, a powerful government entity with almost infinite computing power) could gain a certain amount of control over Bitcoin mining process, that would not necessarily mean the demise and fall of the digital currency,” said Sergey Lozhkin, Senior Security Researcher at Kaspersky Lab.
Image from Flickr images of BTC Keychain.
Metasploit creator and Rapid7 CSO HD Moore today disclosed seven zero-day vulnerabilities in IPMI firmware from vendor Super Micro. The security issues were reported to the vendor in August; however, the vendor, beyond acknowledging receipt of the vulnerabilities, never communicated with Rapid7 regarding a fix.
A Super Micro representative told Threatpost that this was an “old story” and that the issue had been resolved. A request for further comment from a Super Micro project manager was not returned in time for publication and the availability of patches could not be confirmed.
“The vendor has been pretty quiet on this; they acknowledged receipt of the vulnerabilities, but that’s the long and short of it. They’ve said nothing to us about a patch,” said Rapid7 senior manager of engineering Tod Beardsley. “I imagine they’ll be patching silently, but honestly if they do issue patches and make a lot of noise about it, nobody updates these things. It’s embedded hardware that sits on more traditional hardware, but like anything embedded, nobody gets patches for these. I worked in IT for years, and I think I updated BIOS once.”
IPMI, or Intelligent Platform Management Interface, is a specification implemented by tiny computers that sit on a motherboard and are used by IT administrators in large data centers for remote management of servers or remote BIOS maintenance. They’re mostly present in rack-mount servers, and they are cumbersome to update because they often require physical access to the hardware; in a service provider environment, for example, there could be hundreds of these embedded devices present.
Beardsley said that a Project Sonar scan for the IPMI firmware in question, version SMT_X9_226, found 35,000 of them online. He estimates that number likely represents less than 10 percent of the total devices in use.
While these are previously unreported vulnerabilities—Metasploit exploit modules are in the works, Moore said—exploiting them requires a bit of understanding on the attacker’s part.
“You definitely have to know what you’re doing; it’s a different architecture,” Beardsley said. “Most exploit developers and vulnerability researchers are familiar with Intel x86 or Intel 64-bit, or ARM because Android runs on ARM, so it’s popular. But these things run on pretty unusual hardware for infosec guys. Getting reliable exploitability is difficult. I don’t expect a worm in the next six hours or anything. We’ve been sitting on these for a while, trying to get reliable exploits. We can crash all day long, but that’s useless. Getting reliable exploits is tricky. We’ve been going back and forth between emulated environments and real environments and things that seem to work great in emulated environments just fall over on the physical device. It will take some effort for sure.”
However, if an attacker is able to exploit one of the IPMI vulnerabilities disclosed, they would not only be on the network, but could take control of the server in question at a BIOS level.
Of the seven vulnerabilities disclosed, the most serious involve static private encryption keys hardcoded into the firmware for both the Lighttpd Web server SSL interface and the Dropbear SSH daemon, Moore said.
“An attacker with access to the publicly available Supermicro firmware can perform man-in-the-middle and offline decryption of communication to the firmware,” Moore said in a blog post.
Beardsley said that while it’s possible for the admin to update the SSL key for the Web interface, it does not appear possible to update the SSH key.
“So once you know the private key, which you can easily extract from the firmware, it’s game over and I can SSH to any of these devices,” he said.
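Carving such a key out of a firmware image is trivial precisely because PEM-encoded private keys are stored as plain ASCII. A minimal sketch of the idea follows; the sample blob is synthetic, standing in for a real SMT_X9_226 image:

```python
import re

# PEM private keys are delimited by well-known ASCII banners, so a
# hardcoded key can be found with a simple byte-pattern search.
PEM_RE = re.compile(
    rb"-----BEGIN (?:RSA |DSA )?PRIVATE KEY-----"
    rb".*?"
    rb"-----END (?:RSA |DSA )?PRIVATE KEY-----",
    re.DOTALL,
)

def carve_private_keys(blob: bytes) -> list:
    """Return every PEM-encoded private key embedded in a firmware blob."""
    return PEM_RE.findall(blob)

# Synthetic demo blob: binary junk surrounding an embedded key.
blob = (b"\x00\xff\x10JUNK"
        b"-----BEGIN RSA PRIVATE KEY-----\nMIIB...truncated...\n"
        b"-----END RSA PRIVATE KEY-----"
        b"MOREJUNK\x00")
print(len(carve_private_keys(blob)))  # → 1
```

Because every shipped device uses the same baked-in key, one extraction like this is enough to impersonate or eavesdrop on all of them, which is why per-device key generation at first boot is the usual fix.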
Rapid7 also reported that the firmware contains two hardcoded sets of credentials for the OpenWSMan interface: one in the digest authentication file, which cannot be changed by the user and acts essentially as a backdoor, and the other in the basic authentication password file stored on the firmware. Moore said that changing the admin account password still leaves the OpenWSMan password set to admin.
Moore also disclosed two buffer overflow vulnerabilities in each of the login.cgi, close_window.cgi and logout.cgi CGI applications, as well as a directory traversal vulnerability in the url_redirect.cgi CGI application and numerous unbounded strcpy(), memcpy() and sprintf() calls in more than 65 other CGI applications available through the Web interface.
“These things are real computers and have valuable file systems,” Beardsley said. “You can limit yourself to just this device that lives on the motherboard, or in a lot of cases, you can use them to manage the server. That’s what they’re there for, to manage the server. It’s a pretty short step from getting onto the IPMI to getting onto the server proper.”
Cisco has patched a number of vulnerabilities in several separate products, including a serious remote code execution flaw in its Wide Area Application Services Mobile software that could allow an attacker to take complete control of a vulnerable device.
Cisco also has patched a vulnerability in its TelePresence VX Clinical Assistant video conferencing system for health care environments. The fix closes a hole that enabled an attacker to log in to the admin account using a blank password.
“A vulnerability in the WIL-A module of Cisco TelePresence VX Clinical Assistant could allow an unauthenticated, remote attacker to log in as the admin user of the device using a blank password,” the Cisco advisory said.
“The vulnerability is due to a coding error that resets the password for the admin user to a blank password on every reboot. An attacker could exploit this vulnerability by logging in to the administrative interface as the admin user with a blank password.”
Meanwhile, the WAAS Mobile vulnerability affects all versions of the software prior to 3.5.5, and the company has released a new version that includes a fix for the bug.
“The vulnerability is due to insufficient validation of user-supplied data in the body of an HTTP POST request. An attacker could exploit this vulnerability by crafting an HTTP POST request for content upload that would result in an uncontrolled directory traversal. An exploit could allow the attacker to execute arbitrary code on the WAAS Mobile server with the privileges of the IIS web server,” the Cisco advisory says.
“The vulnerable component belongs to the web management interface; however, in a deployment where more than one Cisco WAAS Mobile server is used then all the servers are vulnerable, not just the one performing the Manager role and hosting the web management interface.”
However, Cisco said that the flaw doesn’t affect any clients running the software, only servers.
The company also released a fix for a vulnerability in the SIP implementation in its IOS software, which runs on many of its routers and other devices. The bug could allow an attacker to cause a denial-of-service condition on a vulnerable device.
“The vulnerability is due to incorrect processing of specially crafted SIP messages. An attacker could exploit this vulnerability by sending specific valid SIP messages to the SIP gateway. An exploit could allow the attacker to trigger a memory leak or a device reload,” the advisory says.
Image from Flickr photos of Prayitno.
Security researcher Henry Hoggard recently discovered a cross site request forgery (CSRF) vulnerability in Twitter’s “add a mobile device” feature, giving him the ability to read direct messages and tweet from any account.
Hoggard, a security researcher at MWRInfosecurity, told Threatpost via email that he found the bug in his spare time and reported it to Twitter. Twitter then resolved the vulnerability within 24 hours. Hoggard then posted the details on his personal blog.
A CSRF vulnerability forces a user to execute unwanted actions in an application or service to which that user is already authenticated. These attacks generally involve some social engineering, such as sending an email with a malicious link. When successful, an attacker can wrest control of a user’s account, which could have a wide range of impacts depending on the application in question and the level of rights granted to the targeted user.
In this case, Hoggard found the CSRF bug in a Twitter feature that gives users the ability to add a mobile device to their account and control that account via SMS using the mobile device added.
By creating a CSRF page, Hoggard demonstrated that an attacker could add his own phone number and carrier to the victim’s account. Of course, Twitter built an authentication token into the feature that should have prevented this sort of attack. Unfortunately, Twitter was not actually checking that the token’s value was correct, which meant an attacker could submit any value whatsoever for the token and still be validated.
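The flaw boils down to a presence-only check: the server confirmed that a token parameter existed but never compared it against the token it had issued for the session. A minimal sketch of the validation Twitter was missing (the function and session names here are hypothetical illustrations, not Twitter’s actual code):

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a per-session CSRF token and store it server-side."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token

def is_valid_csrf(session, submitted_token):
    """The check that was missing: the submitted token must MATCH the
    session's stored token, not merely be present in the request.
    hmac.compare_digest avoids leaking a timing side channel."""
    expected = session.get("csrf_token")
    if not expected or not submitted_token:
        return False
    return hmac.compare_digest(expected, submitted_token)
```

With presence-only validation, the second branch never runs and any attacker-chosen string passes; the comparison step is what ties the request to a page the real user actually loaded.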
Hoggard claims that an attacker could compromise a victim account by sending the targeted user a link to a malicious website containing his exploit code (the CSRF page plus a link to Twitter’s “add a device” activation page).
If the user clicks the link, he or she will be unwittingly initiating the process to authenticate the attacker’s device. Twitter, therefore, would be waiting for someone (in this case the attacker) to text “GO” to the mobile short code number that activates the device.
Once this is done, the attacker would receive a device activation notification and would now have the ability to send and receive tweets by texting his or her desired message to the same mobile short code number.
Users with the NoScript extension installed in their browser would not have been affected by this vulnerability even before Twitter fixed it, according to the researcher.
Twitter did not respond to a request for comment, but Hoggard provided communication logs between himself and the social network’s application security team, noting that Twitter fixed the bug incredibly quickly. The logs show that Twitter received his bug report on the morning of November 3, requesting that Hoggard not publicize his findings immediately. Early that same afternoon, the logs indicate that Twitter had resolved the issue.
CryptoLocker is a devious evolution of now-familiar ransomware schemes in which the malware encrypts files it finds on a number of network resources and demands a ransom for the decryption key.
US-CERT issued an advisory today warning businesses and consumers of the risks presented by CryptoLocker, which has been on the radar of security experts since late October. US-CERT said infections are on the rise and urged victims not to pay the ransom, but instead to report infections to the FBI’s Internet Crime Complaint Center.
Victims, meanwhile, have three days to make their payments to the attackers, either via MoneyPak or Bitcoin.
“Some victims have claimed online that they paid the attackers and did not receive the promised decryption key,” the US-CERT advisory said.
CryptoLocker is spreading via a number of phishing campaigns, including some posing as messages from legitimate businesses and others using phony Federal Express or UPS tracking notifications. Some victims said CryptoLocker has appeared after a separate botnet infection, US-CERT said.
The malware sniffs out files on a number of local and network resources, including shared network drives, removable media such as USB sticks, external hard drives, network file shares and some cloud storage services.
“If one computer on a network becomes infected, mapped network drives could also become infected,” the US-CERT advisory warns, adding that victims should disconnect their computers from their wired or wireless networks immediately upon seeing the red-screen notice put up by CryptoLocker that provides details on how to recover the encrypted files.
Once the malware latches on to a victim machine, it connects to the attacker’s command server, which holds the asymmetric encryption key that would unlock the victim’s files.
Costin Raiu, director of the Global Research and Analysis Team at Kaspersky Lab, said CryptoLocker uses a domain generation algorithm that gives the malware up to 1,000 possible domain names from which to connect to its command and control infrastructure. Raiu added that Kaspersky sinkholed three domains and monitored more than 2,700 victims trying to contact those domains during a three-day period in mid-October, with most of the victims in the U.S. and Great Britain.
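A domain generation algorithm of this kind deterministically derives a daily list of candidate rendezvous domains from a shared seed, so infected machines and the operators converge on the same names without hard-coding any of them into the binary. CryptoLocker’s actual algorithm differs; this is only an illustrative sketch of the technique:

```python
import hashlib
from datetime import date

TLDS = [".com", ".net", ".org", ".info", ".biz"]

def generate_domains(seed_date, count=1000):
    """Derive `count` candidate C&C domains from the date. Both the
    bots and the operators run the same function, so they agree on
    the list without any hard-coded addresses to blacklist."""
    domains = []
    for i in range(count):
        material = f"{seed_date.isoformat()}-{i}".encode()
        name = hashlib.md5(material).hexdigest()[:12]  # 12 hex chars as hostname
        domains.append(name + TLDS[i % len(TLDS)])
    return domains
```

Defenders who reverse the algorithm can pre-register or sinkhole a handful of the generated domains, which is how researchers are able to observe victim traffic the way Kaspersky did here.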
Malware such as CryptoLocker is not without precedent. The GPCode malware used RSA keys for encryption, starting with 660-bit RSA before upgrading to 1,024-bit, “putting it perhaps only in the realm of NSA’s cracking power,” Raiu said.
“CryptoLocker uses a solid encryption scheme as well, which so far appears uncrackable,” Raiu added.
Meanwhile, security blog Krebs on Security reported today that the attackers behind CryptoLocker may be softening on their imposed 72-hour payment deadline. Since the attackers require payment through third parties, options that victims may not be familiar with, it could be that the attackers are losing out on some money.
“They decided there’s little sense in not accepting the ransom money a week later if the victim is still willing to pay to get their files back,” Lawrence Abrams of BleepingComputer.com told Krebs. Abrams added that while CERT and some vendors may be advising victims not to pay, some are caving in because they cannot afford to be without their lost files for a significant amount of time.
When Android phone manufacturers tweak devices and customize phones with special software, apps and code, it has a direct effect on the security of each device. In some cases, the changes made can account for more than 60 percent of vulnerabilities found in devices.
That’s according to a paper, “The Impact of Vendor Customizations on Android Security” (.PDF), recently published by a group of computer science students at North Carolina State University with the help of Android researcher and NC State professor Xuxian Jiang.
The research is set to be presented at the 20th ACM Conference on Computer and Communications Security in Berlin later today.
In the study, researchers looked at 10 different Android smartphones (HTC One X, Samsung Galaxy S3, etc.) from five different vendors, two per vendor and one per generation (a pre-2012 2.x build and a post-2012 4.x build), and examined the security flaws that stemmed from each device’s customized setup.
Using a tool called the Security Evaluation Framework for Android (SEFA), the researchers looked at thousands of lines of code to determine each preloaded app’s provenance (who authored it), permission usage (how many permissions each app has) and vulnerability distribution (whether the app could be compromised). The SEFA tool, developed by the researchers, essentially examines a phone’s firmware, compares it to stock Android Open Source Project (AOSP) code and helps detect vulnerabilities in apps.
Eighty-two percent of the apps scanned came customized by the vendor and 86 percent of those apps wound up being what the researchers called overprivileged, meaning they “unnecessarily request more Android permissions than they actually use.”
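In set terms, an app is overprivileged when the permissions it declares exceed the permissions its code paths ever exercise. A toy version of the comparison a tool like SEFA automates (the function name and data shapes below are invented for illustration, not the researchers’ actual implementation):

```python
def overprivilege_report(apps):
    """apps maps an app name to a (declared, used) pair of permission
    lists. Returns, for each offending app, the declared-but-unused
    excess: the permissions it requests but never actually exercises."""
    report = {}
    for name, (declared, used) in apps.items():
        excess = sorted(set(declared) - set(used))
        if excess:
            report[name] = excess
    return report
```

The hard part in practice is computing the `used` set, which requires static analysis of every API call in the app; the set difference itself is the easy final step.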
While overprivileged apps are a risk in their own right, the quintet’s research also looked for legitimate “real, actionable exploits” in phones. Between 65 percent and 85 percent of the vulnerabilities found on LG, Samsung and HTC phones came as a direct result of the vendors’ customizations. The group found a handful of broken security-critical permissions in apps that could send SMS messages on behalf of users without their permission and divulge personal information.
For example, the researchers found a preloaded app in Samsung’s Galaxy S3 phone called Keystrong_misc. If compromised, it can lead to a series of reflection attacks and go down a “dangerous path for performing a factory reset, thus erasing all user data on the device.”
Vulnerabilities in the LG Optimus P880, another phone the researchers analyzed, could lead to a device reboot and expose access to several mailbox tables.
Jiang and his students say they attempted to contact the corresponding phone vendors, and while some have confirmed the vulnerabilities, others have still not responded “after several months,” according to the paper.
Perhaps the most troubling trend in the study, and something that may prompt a change in the way vendors preload their devices in the future, is that there really wasn’t much of a difference in the number of vulnerabilities from one generation of phones to the next.
“Vendor apps consistently exhibited permission overprivilege, regardless of generation,” reads one part of the paper.
While technically the number of vulnerabilities and overprivileged apps decreased from pre-2012 phones to post-2012 phones, the patterns were stable over time, suggesting “the need for heightened focus on security by the smartphone industry,” according to Jiang and company.
The problem stems from Android’s popularity as an open source platform: Google makes it, distributes it as the AOSP, and manufacturers and carriers are free to tweak it as they see fit. The result is a widely varied set of products, loaded with third-party apps and meaningless bloatware, that in the end resemble splintered versions of the original software.
Attackers exploiting the Microsoft Windows and Office zero day revealed yesterday are using an exploit that includes a malicious RAR file as well as a fake Office document as the lure, and are installing a wide variety of malicious components on newly infected systems. The attacks seen thus far are mainly centered in Pakistan.
The CVE-2013-3906 vulnerability, disclosed Tuesday by Microsoft, is a remote code execution flaw that involves the way that Windows and Office handle some TIFF files. Microsoft said that attackers who are able to exploit the bug would be able to run arbitrary code on compromised machines. In the targeted attacks seen by researchers so far, attackers are using ROP techniques to exploit the vulnerability and then installing a downloader that pulls down some additional components, including an Office document that is shown to the user as a distraction from what’s going on in the background.
Researchers at AlienVault analyzed the exploit and malware being used in the targeted attacks and found that once the attackers have compromised the machine, they also download a RAR file that includes components that call back out to the command-and-control server and then download a number of malicious components. The malware installs a keylogger, a remote backdoor and a component that steals various files, including XLS, DOC, PPT and PDF files.
The CVE-2013-3906 vulnerability affects Windows Vista and Office 2003-2010, and Microsoft recommended that users running vulnerable versions install the FixIt tool it released Tuesday, which helps prevent exploitation. Installing the EMET toolkit also can protect users against attacks on this vulnerability.
Most of the IPs connecting to the C&Cs used in these attacks are coming from Pakistan, the AlienVault researchers said. Researchers at Kaspersky Lab also analyzed the malware and found some interesting behavior.
“This is not the first vulnerability in TIFF. The notorious CVE-2010-0188 (based on TIFF too) is widely used in PDF exploits even now. The new 0day uses malformed TIFF data included in Office documents in order to run a shellcode using heap spray and ROP techniques. We have already researched some shellcodes – they perform common actions (for shellcodes): search API functions, download and launch payload. We took a glance at a downloaded payload – backdoors and Trojan-spies. Our AEP technology prevents a launch of any executable file by exploited applications. In this case our AEP protected and continues protecting users too,” said Vyacheslav Zakorzhevsky, head of the vulnerability research group at Kaspersky.
Image from Flickr photos of Elliott Brown.
In a new report detailing the number and kind of requests for user information it’s gotten from various governments, Apple said it has never received a request for information under Section 215 of the USA PATRIOT Act and would likely fight one if it ever came. The company also disclosed that it has received between 1,000 and 2,000 requests for user data from the United States government since January, but it’s not clear how many of those requests it complied with because of the restrictions the U.S. government places on how companies can report this data.
Right now, companies such as Apple, Google and others that issue so-called transparency reports are only allowed to report the volume of requests they get in increments of 1,000. So Apple’s report shows that although it received 1,000-2,000 requests for user data so far in 2013, the number it complied with is listed as 0-1,000. Apple, along with a number of other companies, including Google and Microsoft, has asked the government in recent months for permission to disclose more specific numbers of requests, including specific numbers of National Security Letters.
“At the time of this report, the U.S. government does not allow Apple to disclose, except in broad ranges, the number of national security orders, the number of accounts affected by the orders, or whether content, such as emails, was disclosed. We strongly oppose this gag order, and Apple has made the case for relief from these restrictions in meetings and discussions with the White House, the U.S. Attorney General, congressional leaders, and the courts. Despite our extensive efforts in this area, we do not yet have an agreement that we feel adequately addresses our customers’ right to know how often and under what circumstances we provide data to law enforcement agencies,” Apple officials said in the report.
As the information regarding the surveillance methods and capabilities of the NSA has piled up in the last few months, many tech companies have become more vocal in discussing the requests they get from government agencies and law enforcement. Google, Yahoo, Microsoft and Apple have found themselves defending their practices and trying to reassure users that they don’t provide direct access to their servers or data links for law enforcement. Although the government has placed restrictions on how much these companies can reveal about the volume and kind of requests they get, Apple included one specific line in its transparency report that goes about as far as is permissible right now.
“Apple has never received an order under Section 215 of the USA Patriot Act. We would expect to challenge such an order if served on us,” the report says.
Section 215 is the bit that’s used by the NSA to collect business records such as phone call metadata.
The report also shows data on how many requests Apple has gotten from dozens of other governments, with the highest number being 127 from the U.K.; Apple turned over some data in 37 percent of those requests. The next-highest volume came from Spain, which issued 102 requests; Apple handed over some user data in 22 percent of those cases.
Image from Flickr photos of MrGuyTsur.
Dennis Fisher talks with researcher Dragos Ruiu about his years-long struggle with a group of attackers who have infiltrated his network and are using malware that seems to resist all removal attempts and may have the ability to communicate using sound.http://threatpost.com/files/2013/11/digital_underground_132.mp3
*Dragos image via Gohsuke Takama‘s Flickr photostream, Creative Commons
An Android banking Trojan known as Svpeng has added phishing capabilities to its arsenal, and researchers have spotted it attacking Russian banking clients in what is perceived to be a dry run before it is adapted for other countries.
“Typically, however, cybercriminals first test-run a technology on the Russian sector of the Internet and then roll it out globally, attacking users in other countries,” said Kaspersky Lab researcher Roman Unuchek on the Securelist blog today.
Unuchek said the Trojan, which spreads via SMS spam messages, has new code that checks the language version of the operating system on the victim’s machine in order to tailor its messaging in the proper language. For now, the malware appears to be interested in U.S., German, Belarusian and Ukrainian victims.
Phishing is the big innovation for Svpeng, also known as Trojan-SMS.AndroidOS.Svpeng. Android users in Russia who are infected will be presented with a phishing window upon launching their banking application. The window asks for the victim’s user name and password, which are then sent to a centralized server belonging to the attacker.
Unuchek also said the Trojan tries to steal bank card information by layering a phishing window over Google Play when it’s running on the user’s mobile device. The window prompts the user to enter his credit card or bank card information, including expiration date and CVC number, which is also gift-wrapped and sent to the attacker’s command and control server.
The malware is also capable of issuing commands to transfer money from the victim’s account to the attacker’s. Unuchek said it does so by sending SMS messages to numbers belonging to a pair of Russian banks.
“This way it checks if the cards of these banks are attached to the number of the infected phone, finds out the balance and sends it to the malicious C&C server,” Unuchek wrote. “If the phone is attached to a bank card, commands may arrive from the C&C to transfer money from the user’s bank account to his/her mobile account or to the cybercriminals’ bank account. The cybercriminals may then send this money to their digital wallet and cash it in.”
Svpeng may soon break out beyond the Russian borders; Kaspersky researchers have spotted new behavior in the malware, starting with adaptations based on the victim’s location and language.
Unuchek said there have been 50 modifications to Svpeng in the three months the malware has been monitored. The attackers are also adamant about keeping the Trojan active; it uses Android’s deviceAdmin feature to prevent security products from deleting it. It also prevents the user from disabling deviceAdmin or performing a factory reset by exploiting a previously unknown vulnerability in Android, Unuchek said.
Microsoft is warning users about targeted attacks against a new vulnerability in several versions of Windows and Office that could allow an attacker to take over a user’s machine. The bug, which is not yet patched, is being used as part of targeted attacks with malicious email attachments, mainly in the Middle East and Asia.
In the absence of a patch, Microsoft has released a FixIt tool for the vulnerability, which prevents exploits against the vulnerability from working. The bug affects Windows Vista, Windows Server 2008 and Microsoft Office 2003 through 2010.
“The exploit requires user interaction as the attack is disguised as an email requesting potential targets to open a specially crafted Word attachment. If the attachment is opened or previewed, it attempts to exploit the vulnerability using a malformed graphics image embedded in the document. An attacker who successfully exploited the vulnerability could gain the same user rights as the logged on user,” the Microsoft advisory says.
The vulnerability doesn’t affect the current versions of Windows, the company said, and users who are running potentially vulnerable products can take a couple of actions in order to protect themselves. Installing the FixIt tool will help prevent exploitation, as will deploying the Enhanced Mitigation Experience Toolkit (EMET), which helps mitigate exploits against certain classes of bugs.
“The vulnerability is a remote code execution vulnerability that exists in the way affected components handle specially crafted TIFF images. An attacker could exploit this vulnerability by convincing a user to preview or open a specially crafted email message, open a specially crafted file, or browse specially crafted web content. An attacker who successfully exploited the vulnerability could gain the same user rights as the current user. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights,” Microsoft officials said.
Buying Twitter followers is standard practice for celebrities, politicians, startups, and even so-called social media experts who want to boost their online Q Score.
So it shouldn’t be surprising that hackers have noticed this market opportunity and are building a formidable underground business automating the creation, and selling, of phony Twitter followers.
Fake Twitter accounts are nothing new, but the practice is being refined all the time. Rather than make up people, attackers are taking established Twitter users and duplicating their accounts. The authenticity of the phony accounts is crucial in order to keep these fake accounts live and keep Twitter’s fraud detection capabilities from catching them and turning off the accounts.
“They’re stealing names and appending numbers or letters to your name, copying your profile photo, your bio, your location and start sending out tweets,” said Paul Judge, vice president and chief research officer at Barracuda Networks. “They’re stealing identities and make fake accounts that let them blend in better and seem more credible. They send out these links and someone sees the name, sees the picture and believes it’s you. They’ve stolen trust in you and your reputation by sending out links.”
Barracuda has done noteworthy research on the Twitter underground in the past, and Judge says the evolution of the market is extraordinary, noting in particular that more than 60 percent of new fake accounts are created by duplicating legitimate existing accounts, a tactic that earns better click-through rates on the malicious links they send out.
“There are a few monetization techniques. They’re doing everything from links sending users to sites hosting Web exploit kits to sending links to spam sites hosting affiliate ads, or using the same accounts to sell you fake followers,” Judge said. “They’ve diversified income stream. We’re seeing the same fake account being used for all three.”
Right now, Barracuda research points to 52 eBay sellers soliciting phony Twitter followers at an average of $11 per 1,000 fake accounts. That translates to more than 52,000 followers for each entity buying fake accounts, Barracuda said.
“They’re becoming so profitable in being able to sell these accounts as ‘Fake Followers,’ that the side effect is they’re able to make money without necessarily causing harm,” Judge said. “To some degree, it’s taking some of their attention away from spreading malicious links.”
While Judge said Barracuda doesn’t have good visibility into click-through rates, they do get an indication of profitability from the phony accounts that are used to sell fake followers.
“When you look at a fake account being used to sell itself as a follower, one simple measure of how much business they’re getting is how many accounts they are following; those are their customers,” Judge said. “One thing we’re able to do, for each army of fake accounts, we’re able to look at how many people they’re following, look at the number of unique people they’re following and gauge the level of business they’re having. For some of these, we’re able to see based on the amount they charge per follower, these businesses are generating $20,000 to $30,000 per month on the side of the business just selling fake followers.”
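The back-of-the-envelope math Judge describes is straightforward: treat every account the fake army follows as a purchased follower and multiply by the going rate. A sketch of that estimate, using the $11-per-1,000 figure from Barracuda’s research (the function name is our own):

```python
def estimate_monthly_revenue(follows_sold, price_per_1000=11.0):
    """Rough revenue estimate for a fake-follower operation: the number
    of accounts the bot army followed in a month, priced at the observed
    average rate per 1,000 purchased followers."""
    return follows_sold / 1000.0 * price_per_1000
```

At that rate, roughly 2 to 3 million purchased follows a month lands in the $20,000-$30,000 range Judge cites.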
The entire operation is automated, from the quality of the websites they’re using (easy click-to-pay, slick designs) to the scripting that builds armies of fake followers.
“From the APIs Twitter provides, it’s so easy to script interactions with Twitter’s websites, it’s one of the things that made this grow so quickly,” Judge said. “The ease of which you can become a member and start tweeting, it’s a low barrier that makes it so easy for attackers to take advantage of it versus other social networks that are more complicated.”
Judge said more than 90 percent of the tweets are automated and sent through the Twitter website, which is actually a giveaway that something is amiss given that legitimate users send most of their tweets through mobile applications or third-party clients.
“Look at fake ones, there’s a much higher proportion through Twitter’s websites because it’s all scripted,” Judge said. “We’re also able to see different bursts during the day. You’ll often see an account that doesn’t tweet all day and then see minutes where there are tweets and then it disappears for the rest of the day.”
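Those two signals, an implausibly high share of tweets posted through the website and activity compressed into short daily bursts, can be combined into a simple heuristic. The thresholds and tuple format below are illustrative guesses, not Barracuda’s actual detection logic:

```python
def looks_automated(tweets, web_threshold=0.9, burst_window=300):
    """tweets: list of (timestamp_seconds, source_label) tuples.
    Flags an account whose tweets are overwhelmingly sent through the
    web interface AND all land within one short burst, rather than
    being spread across the day from mobile apps or other clients."""
    if len(tweets) < 2:
        return False  # too little data to judge
    web_fraction = sum(1 for _, src in tweets if src == "web") / len(tweets)
    times = sorted(t for t, _ in tweets)
    bursty = times[-1] - times[0] <= burst_window
    return web_fraction >= web_threshold and bursty
```

Real fraud detection would track these features over many days per account, but the burst-plus-source pattern Judge describes is the core of it.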
The problem for businesses and consumers, however, is that social networks are often the first measure of a business’s or a person’s reputation and trustworthiness. That’s what makes this such an appealing avenue for hackers to exploit.
“The disconnect is that the average person thinks that social media is a measure of popularity, when in reality, all you did was spend $11 for your followers. It’s the equivalent of buying a Zagat review or a five-star rating,” Judge said. “You’re buying accreditation.”