Threatpost for B2B
It turns out the best way to get people to pay attention to those malware warnings that pop up in browsers may be to stop tweaking them, scrap them entirely and rebuild from scratch. According to a study on the subject published last week, effective malware warnings shouldn’t scare users away; they should give a clear and concise idea of what is happening and how much risk users are exposing themselves to.
It’s already well documented that the average computer user largely ignores the warnings, but new research is trying to determine just how browser architects and information technology specialists can create more effective warnings going forward.
Ross Anderson, professor of security engineering at the University of Cambridge, and David Modic, a research associate at the school’s Computer Laboratory, used psychology last year to find their answer. The duo’s research, a 31-page paper titled “Reading This May Harm Your Computer: The Psychology of Malware Warnings,” was released Friday.
“We’re constantly bombarded with warnings designed to cover someone else’s back, but what sort of text should we put in a warning if we actually want the user to pay attention to it?” Anderson asked in a post on his blog Light Blue Touchpaper last week accompanying the study.
The biggest problem the researchers found with malware warnings is that everyday users would ignore them if they could. The two cite a handful of previous studies, including ones that look at the length, frequency, and technicality of warnings, but point out that “daily exposure to an overwhelming amount of warnings” remains an issue.
People continue to have a hard time separating real threats from inconvenient online warnings.
Anderson and Modic argue a way to fix the warnings is to change the narrative.
“There is a need for fewer but more effective malware warnings… particularly in browsers,” the paper reads, reasoning that the way certain warnings are worded is key to getting users to pay attention to them.
As part of their experiment, the researchers presented more than 500 men and women with variations of a Google Chrome warning, each incorporating one of the following angles:
- Influence of authority
- Social influence
- Concrete threats
- Vague threats
Anderson described in his blog which condition gave the best results:
“What works best is to make the warning concrete; people ignore general warnings such as that a web page ‘might harm your computer,’ but do pay attention to a specific one such as that the page would ‘try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you.’”
On the whole, respondents heeded malware warnings regardless of what they said, but as Anderson and Modic expected, they heeded them most when the warnings invoked authority or described concrete threats.
“Warning text should include a clear and non-technical description of potential negative outcome or an informed direct warning given from a position of authority,” the researchers ultimately deduced.
Concrete threats – when individuals have a clear idea of what is happening and how much they are exposing themselves – wound up being the No. 1 predictor of click-through resistance.
The experiment found that authority – when the warnings come from trusted sources – was the No. 2 predictor. Trusted figures “elicit compliance” and, in the study, even extended to Facebook friends.
“Respondents also indicated that they were more likely to click through [warnings] if their friends or Facebook friends told them it was safe to do. Facebook friends thus appear to have more sway on the decision to click through,” they said.
In some cases, these findings could be cause for concern, especially given the number of viral phishing campaigns that have leveraged Facebook over the past several years. Still, Modic and Anderson give credence to social media, reasoning that friends on Facebook may “carry more informative power than regular ones.”
Modic and Anderson made a handful of other observations from their experiment that tie into the idea of overhauling malware warnings.
Nine out of every 10 respondents kept their warnings turned on; only one out of every 10 claimed they wanted to turn theirs off but were unsure how to do it. While none of this is exactly concerning, it does speak to a tiresome status quo: users could be getting used to seeing the same, static malware warning.
As is to be expected, those more familiar with computers kept their warnings on, but those who did turn theirs off did so because they generally ignore malware warnings and requests from their computers.
“The inability to understand the warnings was another significant predictor of turning the malware warnings off. We might infer that the language in existing warnings is not as clear as it could be,” the study asserts.
The research calls back to a few similar studies of late, including one released last summer by Google’s Adrienne Porter Felt and UC Berkeley’s Devdatta Akhawe.
In that study, users mostly paid attention to the warnings they saw, clicking through malware and phishing warnings only 25 percent of the time.
Remember the age of text-based gaming where natural language phrasing would help you maneuver a character through scenes in a virtual world? In a gaming context, that has long been a dinosaur, replaced by intricate and massive online role-playing games. But researchers from Carleton University in Ottawa, Canada, have proposed a way to borrow from those narrative elements to someday build what they hope will be an alternative to passwords.
Their plan is to combine user- and machine-generated narrative, based on the user’s recent activity on a computer, with which the user then interacts as a continuous authentication mechanism for specialized systems. The researchers’ premise is that users are much more likely to remember a familiar or interesting narrative than a complex password.
“If we’re using systems to figure out who our closest friends are, or to provide us with our favorite restaurants or news updates, why can’t personal items be used for authentication as well?” said Carson Brown, one of the authors, along with Anil Somayaji and David Mould, of a paper titled “Towards Narrative Authentication; or Against Boring Authentication.” “Allow the system to have a dialogue and prove that you are you and tell it things you know. It’s a shared secret, but still part of your identity.”
Rather than relying on the user or computer to exclusively generate the narrative, the researchers believe this should be a collaborative effort derived from a user’s recent activity on the computer. For example, it could stem from playing new games, interacting with new applications, or social media check-ins indicating a memorable activity, such as a vacation, that would spawn a new narrative.
“In practice, the dialog would probably involve highly constrained user choices at every stage, at least initially,” the researchers wrote. “Advances in natural language processing, however, might allow for more flexible collaborative story creation.”
Brown wrote in the paper, presented last September at the New Security Paradigms Workshop, that things humans find boring are not retained, while memories that are interesting stay with us. Passwords, in other words, are easily forgettable, and choosing to authenticate from good stories or pleasant memories keeps the user engaged and, the researchers hope, lessens the risk that attackers can steal credentials the way they can with today’s weak authentication schemes.
“Good stories are almost impossible to forget, and even bad stories can be remembered. …” the researchers wrote. “In fact, people often tell stories to verify each other’s identities by verifying that they both share some common set of stories, often using exchanges that are unintelligible to others who do not know those same stories. Further, those exchanges can be remarkably quick and concise.”
While computers’ understanding of narrative is poor, the researchers postulate that elements of a narrative such as places, objects, settings or characters can be converted via software to a form people would remember and computers could verify. This could take on a challenge-response format.
“The remote server should store a complex narrative structure—a story or a set of stories—that is then used to drive a dialogue with the user,” the researchers wrote. “The system sends challenges to the user that require knowledge of the stories to be successfully responded to but can be responded to using information derived from only a small portion of the narrative structure.”
The research paper provides an example of how narrative-based authentication would work from a text-based game called Stackers. In the game, the user is asked to stack a number of objects in a particular order to proceed, or in this case, to authenticate. Sizes or colors could be added to the objects to ward off brute-force or even replay attacks, the researchers wrote.
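The challenge-response format the researchers describe can be sketched roughly as follows. The class, prompts, and answers below are our own illustrative inventions, not code from the Carleton prototype:

```python
import hashlib
import hmac
import secrets

class NarrativeAuthenticator:
    """Toy sketch: the server stores narrative elements (places, objects,
    characters) and challenges the user on a small portion of them."""

    def __init__(self, story_elements):
        # story_elements: mapping of challenge prompts to expected answers,
        # e.g. {"What did you stack on the red crate?": "lantern"}
        # Only hashes of the answers are retained.
        self._secrets = {
            prompt: hashlib.sha256(answer.lower().encode()).hexdigest()
            for prompt, answer in story_elements.items()
        }

    def challenge(self):
        # Pick one element of the narrative at random; the user needs
        # knowledge of only a small portion of the story to respond.
        return secrets.choice(list(self._secrets))

    def verify(self, prompt, response):
        expected = self._secrets[prompt]
        given = hashlib.sha256(response.lower().encode()).hexdigest()
        # Constant-time comparison, as for any shared-secret check.
        return hmac.compare_digest(expected, given)
```

A login would then call `challenge()`, display the prompt, and pass the user's reply to `verify()`; a real system, as the paper notes, would constrain the user's choices at each stage rather than accept free text.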
They say that your worst fears and your fondest dreams are rarely realized. That may well be true in most walks of life, but in the information security world, 2013 was the year that our worst fears were not only confirmed, but so were some things that few but the most paranoid among us thought possible.
The list of NSA-related revelations is well-known by now: the phone metadata collection program, PRISM, subversion of a random number generator in a NIST standard, development of an arsenal of capabilities to break SSL, tapping undersea fiber cables, monitoring the communications of foreign leaders and even assembling a catalog of information-warfare tools with outlandish capabilities. Some of these revelations involve capabilities or programs that people in the security industry have either suspected were in use or had some evidence were being used. The metadata program, for example, had been discussed in some corners of the industry for several years, as had the possibility of a backdoor in the Dual_EC_DRBG random number generator.
The security and privacy implications of these programs, as well as the others that have been revealed by the leaks of documents from Edward Snowden, are obvious and devastating. Some of the fundamental technologies and platforms that billions of users rely on for their communications every day are continuously monitored. They should be considered compromised.
In many ways, the promise of the Internet as an open, usable communications platform available to everyone has been broken. For the network to be useful, its users must be able to place some level of trust in it, and the protocols and technologies on which it’s built. The revelations of the last seven months have made it clear that’s just not possible. The plain truth is that we no longer know what to trust.
That’s the cold, ugly lesson of 2013, that trust, the thing that’s needed in order for security and privacy to work, is not just difficult, but may be impossible in some cases. If you rely on encryption to protect your sensitive online communications, as many of us do, how can you trust that those packets you’re sending and receiving aren’t being diverted or decrypted somewhere? You can’t. If you prefer to be left alone and not have your every online movement, interaction and email tracked, you’re out of luck.
The Internet hasn’t been the open, flexible, user-oriented network it was meant to be for a long time–if it ever actually was. Now, it’s become a poisoned, paranoid environment where everything is suspect. The last year was a brutal one for privacy, freedom and security and it’s unclear whether 2014 or any of the coming years will be any better. Only the most optimistic bettor would make that wager and optimists seem to be an endangered species these days.
Security researchers from Malware Must Die uncovered new ransomware called PrisonLocker, and said the malware author is either a legitimate security researcher or is posing as one via a personal blog and Twitter handle.
Malware Must Die has monitored PrisonLocker’s development since spotting it for sale on an underground criminal hacking forum in November. The ransomware, also known as PowerLocker, is all but ready for sale. At the moment, it appears to lack a completed graphical user interface and is still undergoing quality assurance tests. Once it’s ready, the creator claims he will sell the malware for roughly $100 per license, payable in the cryptocurrency Bitcoin.
According to specifications listed by the author in a number of locations, the PrisonLocker infection process will begin with a Trojan that drops a single executable file into a temp folder. Following successful installation, PrisonLocker is designed to encrypt nearly every file on infected machines, including those on hard drives and shared drives but excluding .exe, .dll, .sys, and other system files. According to a Pastebin post from Dec. 19, PrisonLocker will deploy the Blowfish cipher, and each infected machine will have a corresponding Blowfish decryption key that is encrypted using 2048-bit RSA.
Other features include persistence through Windows registry keys, disabling infected users’ Windows and escape buttons, and blocking task manager, command prompt, registry editor, and other Windows utilities.
Like CryptoLocker, PrisonLocker gives infected users a predetermined amount of time to pay the ransom before the decryption key is deleted forever. Whoever administers the ransomware will have the ability to choose the preset amount of time and to pause or reset this deletion clock in order to examine ransom payments. Other customizable features include naming and placing the infection file, determining the ransom amount and method of payment, and establishing the username and password for the administrative panel, which are set to “admin” and “admin” by default.
PrisonLocker also boasts a number of analysis prevention features. Its author claims it detects basic virtual machine, sandbox, and debugger environments. The malware will also set up what its creator calls a “locked window in a new desktop.” This, the creator claims, will render the “alt+tab” command useless and, thus, cut off all other applications. Beyond that, even if a user manages to escape the locked window, PrisonLocker includes a module that forces the locked window to the forefront of the user’s desktop every few milliseconds.
Interestingly, the ICQ messaging ID and email address associated with the malware author’s handle (gyx) on a number of sites is also associated with the Twitter handle @Wenhsl and the security blog Wenhsl[.]blogspot[.]com. In that Twitter profile’s bio, the user describes himself as follows:
“Security enthusiast. Novice infosec/malware researcher and cybercrime analyst. C/C++ and currently polishing up my MASM.”
PrisonLocker is written in C++. Malware Must Die suggests that the author may either be double dipping as a security researcher and a criminal, or merely pretending to be a benevolent security researcher to cover his tracks as a criminal. Malware Must Die contacted various law enforcement agencies and provided this information to them.
The race to replace the Blackhole Exploit Kit as the web exploit pack of choice for cybercriminals seems to have an early leader in Magnitude.
Researchers at Dutch security firm Fox-IT reported over the weekend that European visitors to Yahoo were falling victim to malicious ads hosted on the site. The ads were injecting iframes into users’ browsers and redirecting them to sites hosting Magnitude.
This is the first known major incursion redirecting to Magnitude since the takedown of Blackhole and the arrest of its alleged creator Paunch in October.
The Magnitude exploit kit targets Java vulnerabilities and installs a number of dangerous Trojans, including Zeus, Dorkbot, Necurs and various click-fraud malware. Fox-IT’s investigation concluded the infections started Dec. 30, possibly earlier.
Most of the victims are in Romania, Great Britain and France. Fox-IT said it was monitoring an average of 300,000 visits per hour to Yahoo; based on an estimated infection rate of 9 percent, the company said about 27,000 infections were happening per hour.
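Fox-IT’s estimate is straightforward arithmetic:

```python
# Back-of-the-envelope check of Fox-IT's figures: ~300,000 visits per hour
# to the malicious ads, with an estimated 9 percent infection rate.
visits_per_hour = 300_000
infection_rate = 0.09

infections_per_hour = visits_per_hour * infection_rate
print(round(infections_per_hour))  # → 27000
```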
“At this time, it’s unclear why those countries are most affected,” the company wrote on its blog. “It is likely due to the configuration of the malicious advertisements on Yahoo.”
The Washington Post reported, meanwhile, that Yahoo has removed the advertisements in question.
“Users in North America, Asia Pacific and Latin America were not served these advertisements and were not affected,” a Yahoo representative told the Post. “Additionally, users using Macs and mobile devices were not affected.”
The malicious ads were served by Yahoo from a number of domains, including two registered on Jan. 1: blistartoncom[.]org and slaptonitkons[.]net. Those domains then redirect to a number of domains hosting Magnitude, including boxdiscussing[.]net, crisisreverse[.]net, and limitingbeyond[.]net. All of the domains, Fox-IT said, were served from a single Dutch IP address, 193[.]169[.]245[.]78. The company advises concerned organizations to block the 192.133.137 and 193.169.245 subnets.
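The blocking advice amounts to a simple prefix lookup. Note that we interpret the two prefixes Fox-IT lists as /24 networks, which is an assumption on our part, and the helper below is illustrative, not Fox-IT tooling:

```python
import ipaddress

# The two subnets from the advisory, read as /24 networks (our assumption).
BLOCKED_NETS = (
    ipaddress.ip_network("192.133.137.0/24"),
    ipaddress.ip_network("193.169.245.0/24"),
)

def is_blocked(addr: str) -> bool:
    """Return True if addr falls inside either advertised subnet."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_NETS)

# The single Dutch IP Fox-IT saw serving the exploit kit domains:
print(is_blocked("193.169.245.78"))  # → True
```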
“It is unclear which specific group is behind this attack, but the attackers are clearly financially motivated and seem to offer services to other actors,” Fox-IT said, adding that Magnitude is similar to an exploit kit used in an October compromise of php.net.
Since the takedown of the Blackhole Exploit Kit shortly following the arrest of its alleged creator Paunch in Russia, cybercriminals have yet to settle on an adequate successor. The hodgepodge of exploit kits in circulation, including Magnitude, Cool, Angler, Neutrino and others, doesn’t have the same muscle as Blackhole, which was not only a complete catalog of webinjects and banking malware but was also updated almost daily and relatively affordable, with an annual license selling for around $1,500. Since Paunch’s arrest, activity from Blackhole and its cousin Cool has dwindled to almost zero, and attackers are scrambling not only for a successor but also to recover lost revenue.
Recently, researchers at Websense reported that the keepers of the Cutwail botnet had resorted to phishing and spam email schemes spiked with malicious attachments or links to malware downloads because of the unavailability of Blackhole. Previously, the botnet had relied heavily on Blackhole to automatically compromise computers and install banking Trojans or other financial malware, and to a lesser extent on direct attachments. That ratio has flipped, Websense said.
“What we’ve seen post Blackhole is this immediate cutoff where the URL based attacks inside these emails declined because of the Blackhole infrastructure going down,” said Alex Watson, Websense director of security research.
As for Magnitude, Websense reported a blip in which criminals experimented with the new exploit kit for a period of time, but then moved away. Magnitude and Neutrino, a number of researchers report, support many of the most recent exploits, but they seem to be works in progress in terms of how they deliver redirects or exploits.
“It has to be a worthwhile business arrangement as well. When they adopt exploit kits, it’s both a mixture of the frequency of adoption to avoid security solutions and another element how quickly it is to incorporate the latest exploits,” Watson said. “The third is the cost of the business arrangement for the exploit kit and if it can be competitive with what Blackhole was before.”
UPDATE–Bruce Schneier, the famed cryptographer and author who recently left his longtime post at BT, has taken a new position as CTO of Co3 Systems, a startup that provides incident response systems. Schneier, a central figure in the security industry for more than two decades, said he is excited about the new challenge ahead.
Schneier left BT last month after spending nearly 15 years at Counterpane, which he helped found, and BT, which acquired the company in 2006. Counterpane was part of the first wave of managed security services providers in the late 1990s and provided monitoring and detection services for its customers. He said that the incident response system that Co3 now provides may have been something that Counterpane could have put together had the company not been acquired. In joining Co3, Schneier rejoins one of the other members of Counterpane’s original executive team, John Bruce, who is CEO of Co3.
Schneier said that he sees a lot of need for the service that Co3 provides, especially in today’s environment where breaches are a daily occurrence and every organization is a target.
“Years ago, attacks were both less frequent and less serious, and compliance requirements were more modest. But today, companies get breached all the time, and regulatory requirements are complicated — and getting more so all the time. Ad hoc incident response isn’t enough anymore. There are lots of things you need to do when you’re attacked, both to secure your network from the attackers and to secure your company from litigation,” Schneier said on his blog.
“The problem with any emergency response plan is that you only need it in an emergency. Emergencies are both complicated and stressful, and it’s easy for things to fall through the cracks. It’s critical to have something — a system, a checklist, even a person — that tracks everything and makes sure that everything that has to get done is.”
Schneier said that he had been getting a little restless in the last year and was looking around for something interesting to do. He had enjoyed working in a startup environment at Counterpane and had been on the advisory board at Co3 Systems for a while, so the pieces fit together easily.
“I was getting a little antsy and bored, but then I thought, do I want to work for a company? But I know the people here and I like the product and it fits with my philosophy,” he said in an interview. “It’s peripheral enough to what I do that it doesn’t raise any questions. If I went to work for a hard-disk encryption company then immediately the NSA question comes up. I’m not going to stop doing what I’m doing. This is a company where there’s no quandaries.”
In recent months, Schneier, who is best-known for his cryptography work and his books on information and physical security, has been working with journalists at The Guardian to help analyze some of the NSA documents leaked by Edward Snowden. He also is currently serving as a fellow at The Berkman Center at Harvard University.
“I’m pretty excited about this. It’s good to be back at a startup. Plus, John Bruce and I worked together at Counterpane…so we both know exactly what we’re getting ourselves into,” Schneier said by email.
The work that he has done on the Snowden documents will continue, Schneier said, because he views it as more important than any given job. He will be working on the documents with Glenn Greenwald at his new media venture.
“None of that stops. That’s a rule with any company. Given the choice, the job loses,” he said. “I mean, what’s more important?”
While much of the coverage of the surveillance programs revealed by Edward Snowden has focused on the legality and constitutionality of the collection of metadata and Internet traffic in the name of counter-terrorism and national security, the question of whether these programs are actually cost effective has gone largely unexamined. But a pair of academic researchers decided to have a look at whether the NSA–and by extension, the American people–is getting anything worthwhile for the untold millions spent on the metadata program. Their conclusion: probably not.
The metadata program, which was the first surveillance system revealed by Snowden in June, is authorized under Section 215 of the USA PATRIOT Act and enables the NSA to collect and store phone call records under blanket court orders. The agency can store these records for five years, and they include information such as the originating and terminating phone numbers and the length of each call; they don’t include call content. Administration and intelligence officials have said in the wake of the Snowden leaks that collecting this data enables them to “connect the dots” among various disparate pieces of intelligence and suspects in order to conduct terrorist investigations. They also have argued that the disclosure of the Section 215 surveillance program and others in recent months have caused serious damage to American intelligence capabilities.
However, as the authors of the new paper note, terrorists have known for decades that the NSA is listening to their electronic communications. The authors, John Mueller, an adjunct political science professor at Ohio State University, and Mark G. Stewart, professor and director of the Centre for Infrastructure Performance and Reliability at the University of Newcastle in Australia, argue that the current set of revelations hasn’t given terrorists significantly more information.
“It is possible that the current revelations will impress the terrorists even further about the extent of the surveillance effort. But even if that is so, the effect would mainly be to make their efforts to communicate even more difficult and inconvenient,” they write in their paper, which was produced for the journal I/S.
“Conceivably, as some maintain, there still exist some exceptionally dim-witted terrorists or would-be terrorists who are oblivious to the fact that their communications are rather less than fully secure. But such supreme knuckle-heads are surely likely to make so many mistakes—like advertising on Facebook or searching there or in chatrooms for co-conspirators—that sophisticated and costly communications data banks are scarcely needed to track them down.”
In their paper, Mueller and Stewart try to determine what the cost of the metadata collection program might be, not just in monetary terms, but also in terms of other lost opportunities and damage to privacy. The budget for the program is classified, but the authors say that the direct costs of it could be relatively low. They caution, however, that the dollar figure the NSA spends on the program isn’t the only one that matters. There is also the cost of following up on whatever leads the metadata program generates, as well as the privacy cost to citizens whose records end up in the database, something that’s difficult to quantify.
Mueller and Stewart also are concerned with the effectiveness of the metadata program, and look closely at the infamous group of 54 terrorist incidents or plots that NSA Director Keith Alexander has cited as being identified or disrupted through the use of the Section 215 surveillance. The list of incidents itself is classified, but NSA officials have testified that 90 percent of them were identified using section 702 surveillance, which is the authority for the so-called PRISM program that collects Internet traffic.
“Thus, the 215 program, in which metadata are accumulated and stored for all telephone calls within the United States, presumably played a role only in around 5 cases over the course of the program. According to General Alexander, only 13 of the 54 cases on the classified list had a ‘homeland nexus,’ the others having occurred in Europe (25), in Asia (11), and in Africa (5),” the paper says.
“Four of the cases, all presumably included in the ‘homeland nexus’ subset, were publicly discussed in Congressional testimony on June 18, 2013, by Alexander and by Sean Joyce, Deputy Director of the FBI. Insofar as NSA surveillance played a role at all in these cases, it seems that it was the 702 program, not the 215 one, that was relevant.”
The one case in which the 215 program does appear to have played a role, the authors say, involved a Somali cab driver living in San Diego who had sent some money to a group in his native country that was fighting Ethiopia. The authors cite comments from Sen. Patrick Leahy that the cases described by Alexander “weren’t all plots and they weren’t all disrupted.”
“Absent such information, and keeping in mind the impressive record of dissembling that NSA has so far amassed, it does seem to be a reasonable suspicion—supported by the public comments of Senator Leahy—that the four cases discussed represent not a random selection from the list, but the best they could come up with. If that is so, the achievements of 215 do seem to be decidedly underwhelming,” the authors say.
Mueller and Stewart conclude that in order for the metadata program to be cost-effective, the price tag would need to be quite low.
“Although the cost of the 215 program remains classified, it is possible to calculate how much that cost would have to be for the program to be cost-effective. Even making some generous assumptions about its effectiveness, the program would be cost-effective only if its full price tag (including all the cost considerations arrayed above) is less than $33.3 million per year. The full NSA budget, for reference, is about $10 billion,” they conclude.
“It seems likely that ‘on net’ (as the President puts it) the highly-controversial 215 program could also safely be retired for ‘operational and resource reasons’ with little or no negative consequences to security…”
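The break-even logic behind a figure like that can be sketched as a one-line calculation: a program is cost-effective only if its full price tag is below the expected losses it averts. The function name and all input numbers below are hypothetical placeholders for illustration, not Mueller and Stewart's actual assumptions:

```python
# Illustrative break-even sketch. Expected benefit per year is the chance
# of an otherwise-successful attack, times the losses it would cause,
# times the share of that risk the program actually eliminates.
def break_even_cost(attack_probability, loss_per_attack, risk_reduction):
    """Maximum annual cost at which the program pays for itself."""
    return attack_probability * loss_per_attack * risk_reduction

# Hypothetical inputs: a $200M attack otherwise expected every other year,
# with the program averting 10 percent of that risk.
threshold = break_even_cost(0.5, 200_000_000, 0.10)
print(f"${threshold:,.0f} per year")
```

Plug in more generous assumptions and the threshold rises, but against a roughly $10 billion NSA budget, the point of the authors' exercise is how small any plausible threshold turns out to be.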
The OpenSSL Project blames a weak password used at its hosting provider for its recent site defacement.
The organization that hosts the ubiquitous open source encryption implementation updated a notice on its website yesterday informing users that attackers used the weak credential to gain control of a hypervisor management console. The update says the OpenSSL server is a virtual server sharing a hypervisor with other customers at its service provider.
The attackers were able to get in on Dec. 29 and manipulate the organization’s virtual server, the notice said.
“Other than the modification to the index.html page, no changes to the website were made. No vulnerability in the OS or OpenSSL applications was used to perform this defacement,” the notice said, adding that the source repositories had been audited and were not accessed.
VMware yesterday denied reports that its software had been compromised as part of the OpenSSL defacement.
“We have no reason to believe that the OpenSSL website defacement is a result of a security vulnerability in any VMware products and that the defacement is a result of an operational security error,” the company said in a statement.
Hypervisors are software programs used to create and manage virtual machines; hosting providers can use them to manage multiple machines on a single host.
OpenSSL is more than a TLS or SSL implementation; it’s also a full cryptographic library that is at the core of numerous commercial software products that make use of encryption.
An attack on OpenSSL in which hackers were able to access source code and inject backdoors or other malware could have devastating consequences. There has also been considerable speculation that the NSA would covet a backdoor in OpenSSL, given its presence in any number of high-profile products and web applications; the list of FIPS Cryptographic Module Validation Program-certified products, for example, is lengthy and target-rich, featuring hundreds of security and networking products.
A Turkish hacking group known as TurkGuvenligi claimed responsibility for the defacement. The group took down the webpage and left behind the message: “TurkGuvenligiTurkSec Was Here @turkguvenligi + we love openssl _.”
The SANS Institute’s Internet Storm Center reports a surge in probes against port 32764, which matches the port used by an alleged backdoor in Linksys routers that was reported over the New Year’s Day holiday.
“At this point, I urge everybody to scan their networks for devices listening on port 32764/TCP. If you use a Linksys router, try to scan its public IP address from outside your network,” wrote SANS CTO Johannes Ullrich.
Ullrich said there was relatively little scanning activity on that port prior to Thursday, when three source IPs began conducting probes; as of this morning, the center had logged close to 20,000 probe records against more than 4,000 targets.
Most of the probes are coming from one of the three source IP addresses in question, as well as from the Shodan search engine.
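For readers who want to follow Ullrich's advice, a simple TCP connect check is enough to see whether a device answers on the suspect port. The Python sketch below is illustrative only and is not part of the SANS advisory; the localhost check in the demo is a placeholder, and you would substitute your router's LAN or public IP address.

```python
import socket

def port_open(host: str, port: int = 32764, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder target: check your own machine first, then replace
    # "127.0.0.1" with your router's address, per Ullrich's advice.
    print("127.0.0.1:32764 open ->", port_open("127.0.0.1"))
```

Scanning a host you do not own is, of course, a bad idea; limit checks to your own equipment.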
The alleged backdoor was disclosed in a Github post by French hacker Eloi Vanderbeken. He uploaded a PowerPoint presentation to Github describing the backdoor, which he found not only in five different Linksys DSL modem/routers, but also in a number of Netgear, Cisco and SerComm home and business boxes.
“I didn’t want to lose my time in writing a full report, it’s a very simple backdoor that really doesn’t deserve more than some crappy slides,” Vanderbeken wrote.
His slides describe his journey over Christmas to regain access to his home router’s admin console after losing what he describes as a very long and complex password. He began by conducting an Nmap scan where he found the router listening and responding over 32764 to a number of commands. After finding and downloading the firmware for his Linksys gear and reverse engineering its MIPS binary code, he found he could exploit a buffer overflow and cause the router to revert to its default settings.
Vanderbeken was then able to use this opening to get a command shell and write a script that gave him administrator access to the router.
It’s unclear from his Github entry whether any of the hardware manufacturers were notified of the weakness.
Researchers, meanwhile, spent a good amount of time last year looking at the security of home and small office networking gear and found a number of serious issues. Cisco Linksys EA2700 boxes were found to be vulnerable to cross-site scripting attacks, file-path traversal attacks, cross-site request forgery and even a potential source code disclosure, according to penetration tester Phil Purviance, who reported his findings to Cisco last March.
Prior to that, IOActive researchers Sofiane Talmat and Ehab Hussein shared research that demonstrated that home routers and modems from ISPs can be chained together to redirect traffic in click-fraud scams, keep blocks of users from reaching the Internet, or launch denial of service attacks. Talmat and Hussein were also able to take advantage of vulnerable firmware and upload their own in simulated attacks. Their new firmware took the place of factory-installed firmware, rendering factory-reset options useless.
A group of hundreds of academics from countries around the world has started a petition demanding that Western governments, such as those of the United States and UK, stop the mass surveillance programs they have in place and “effectively protect everyone’s fundamental rights and freedoms”.
The petition is the latest public effort from various groups of security and privacy researchers, Internet pioneers and academics who are concerned about the effects of mass surveillance on the security of the Internet and the privacy of users’ communications. Signed by academics from more than two dozen countries, the petition calls on intelligence agencies to end blanket surveillance and become subject to greater oversight and transparency.
“Intelligence agencies monitor people’s Internet use, obtain their phone calls, email messages, Facebook entries, financial details, and much more. Agencies have also gathered personal information by accessing the internal data flows of firms such as Google and Yahoo. Skype calls are “readily available” for interception. Agencies have purposefully weakened encryption standards – the same techniques that should protect our online banking and our medical files. These are just a few examples from recent press reports. In sum: the world is under an unprecedented level of surveillance,” the petition says. “This has to stop.”
Many of the signatories of the petition have spoken individually about, and been critical of, the mass surveillance programs run by the NSA and the UK’s GCHQ, such as the metadata collection program that pulls in hundreds of millions of phone records a day and the tapping of undersea Internet cables to collect raw traffic flowing over networks owned by major providers. Bruce Schneier, a cryptographer and author who has been involved in some of The Guardian’s efforts to publish the leaked NSA documents, signed the petition, as did Ross Anderson of the University of Cambridge, Alessandro Acquisti of Carnegie Mellon University, Marc Rotenberg of Georgetown University and Jay Rosen of New York University.
“Without privacy people cannot freely express their opinions or seek and receive information. Moreover, mass surveillance turns the presumption of innocence into a presumption of guilt. Nobody denies the importance of protecting national security, public safety, or the detection of crime. But current secret and unfettered surveillance practices violate fundamental rights and the rule of law, and undermine democracy,” the petition says.
“The signatories of this declaration call upon nation states to take action. Intelligence agencies must be subjected to transparency and accountability. People must be free from blanket mass surveillance conducted by intelligence agencies from their own or foreign countries. States must effectively protect everyone’s fundamental rights and freedoms, and particularly everyone’s privacy.”
The petition, titled Academics Against Mass Surveillance, comes at a time when there is a huge amount of public scrutiny of the NSA’s collection methods and programs. Since the leaks from former NSA contractor Edward Snowden began in June, security researchers, lawmakers and privacy advocates have called for greater oversight and reform of the agency’s collection methods. There are a number of pending lawsuits against the NSA and some of the companies involved in the collection programs, as well as legal challenges of the programs from groups such as the ACLU and the EFF.
Image from Flickr photos of Tim Gillin.
Dennis Fisher and Mike Mimoso talk about the year that was in the security industry, including the last six months of NSA drama, the Microsoft bug bounty program, exploit sales and attacks against major banks.

http://threatpost.com/files/2014/01/digital_underground_139.mp3
It didn’t take long for hackers to exploit a previously disclosed vulnerability in the popular photo sharing application Snapchat. As yet unidentified hackers spent yesterday’s New Year’s holiday dumping 4.6 million of the service’s usernames and partial phone numbers and posting them online for the public to peruse.
The site that was hosting the slew of information, SnapchatDB.info, remains offline this afternoon. In its place a note from the site’s hosting company acknowledges the account corresponding to the site has been suspended.
For a short time yesterday the site allowed anyone to download all of the leaked data as either a SQL dump or CSV text file.
The hackers responsible for disclosing the information claim they omitted the last two digits of the leaked phone numbers to “minimize spam and abuse” but encouraged interested parties to contact them for the full database.
“Feel free to contact us to ask for the uncensored database. Under certain circumstances, we may agree to release it,” read one part of the site, which has been since cached on Google.
Information about the site is sparse but according to whois.domaintools.com, someone whose address and phone number can be traced to Panama registered the site on New Year’s Eve.
It isn’t clear whether the leaked information is legitimate, but the fact that the site was taken offline so fast suggests there may have been some validity to the hack, and that, given the sensitive nature of the data, the company may have had it removed.
Representatives from the company failed to immediately respond to a request for comment Thursday.
News of the hack spread first on YCombinator’s Hacker News site. From there some sleuths on Reddit were able to comb through the millions of phone numbers to deduce that the average Snapchat user has a better chance of not being on the list than being on it.
Based on the leaked telephone area codes, if the phone number attached to a Snapchat account is based in one of the following states, the account’s information likely isn’t in the database:
- New Hampshire
- New Mexico
- North Carolina
- North Dakota
- Rhode Island
- West Virginia
The leaked phone numbers appear to be largely confined to North America and include users from major cities across the United States (Los Angeles, Chicago, Denver, etc.) and some remote parts of Northern Canada.
Researchers at Gibson Security warned about the bug in a full disclosure post on their site Christmas Eve claiming it was “ridiculously easy” to use Snapchat’s API to match its users’ phone numbers with usernames on a massive scale. According to the researchers, despite disclosing the bug to the company in August, Snapchat hadn’t made any moves to fix the issue in the last five months.
Snapchat went as far as to dismiss Gibson Security’s claims in a blog post last Friday, claiming the company doesn’t display phone numbers to other users and doesn’t support the ability to look up phone numbers by username. The company tried to quell fears by claiming it has “implemented counter-measures and continue to make improvements to combat spam and abuse.”
In what could prove to be quite the blunder for Snapchat, the company may actually have helped the hackers by suggesting, in the same blog post, how to create a database like the one that was leaked.
“Theoretically, if someone were able to upload a huge set of phone numbers, like every number in an area code, or every possible number in the U.S., they could create a database of the results and match usernames to phone numbers that way,” warned the post.
It appears the hackers were able to do just that, just on a lesser scale.
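For illustration, the enumeration technique Snapchat's own blog post described can be sketched in a few lines of Python. The `api_call` hook below is a placeholder standing in for the lookup request, not Snapchat's real endpoint; the sketch only shows the generate-and-match logic.

```python
def numbers_in_prefix(area_code: str, exchange: str):
    """Yield all 10,000 phone numbers sharing an area code and exchange."""
    for line in range(10_000):
        yield f"{area_code}{exchange}{line:04d}"

def lookup_usernames(batch, api_call):
    """Map each number that resolves to an account to its username.

    `api_call` is a hypothetical hook: given a phone number, it returns
    a username or None. In the real attack this would be the bulk
    phone-number lookup the researchers warned about.
    """
    found = {}
    for number in batch:
        username = api_call(number)
        if username:
            found[number] = username
    return found
```

Run against every exchange in an area code, this yields exactly the kind of number-to-username database the hackers published, which is why rate limiting and lookup restrictions were the obvious fixes the researchers asked for.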
It’s not yet certain what percentage of Snapchat’s users may have been put at risk by the hack. The app was used by more than 8 million U.S. users in May 2013, according to data provided by Nielsen this past summer, but it’s almost certain that figure has jumped since, especially in the wake of the app’s increased popularity.
As a service to anyone who might be worried their account information is out there, Gibson Security has put together a searchable, web-based tool that allows users to verify whether or not their data has been leaked.
Target Corp.’s admission that encrypted PIN data was stolen in the Black Friday breach was bad news for consumers. For security experts, especially cryptographers, particular exception was taken to the retail giant’s use of Triple DES (3DES) encryption to keep the PIN data safe.
With all crypto suffering scrutiny under the weight of the Snowden leaks, security experts are extra leery of 3DES because of its age and the availability of cryptographically stronger options such as AES.
Target insists the PIN data is safe because the numbers were encrypted at physical retail locations on the PIN pad, and the key is not stored with the data. Instead, the key is with the company’s payment processors, one of which, First Data, said it is not aware of any breaches or abuse on its end.
“What this means is that the ‘key’ necessary to decrypt that data has never existed within Target’s system and could not have been taken during this incident,” Target spokesperson Molly Snyder said.
Matthew Green, a noted cryptographer and professor at Johns Hopkins University, said the PIN data is likely secure if Target is being forthright. Hackers cannot decrypt the PIN data without the key, or access to the machine storing the key.
“Most people object to 3DES because it’s an ancient algorithm that was designed as a patch for (now broken) DES until AES was finalized,” Green said via email. “Now we’ve had AES for more than a decade, it’s questionable why we’d be using 3DES.”
Assuming, too, that Target is compliant with the Payment Card Industry Data Security Standard (PCI-DSS), its mandates require unique keys for every payment terminal, limiting the scale of the risk posed by the breach, which resulted in 40 million debit and credit card numbers being stolen. The attackers, Green wrote in a blog post, would have to hit every terminal to have all the PIN data, or hack the processors.
PCI also requires four-digit PINs to be padded to add complexity to the data being encrypted. Four-digit PINs are child’s play for a brute force attack since there are only 10,000 possible combinations. Padding and salting the PIN data raises the cost of decrypting the data for an attacker. These techniques require using part of the credit card number as part of the key encrypting the PIN.
“Done this way, every PIN number now decrypts to a different value. If they did this, then it would indeed be the same as if no PIN information were stolen at all,” wrote Robert Graham, a researcher with Errata Security.
Green, meanwhile, described a number of possible encryption formats for PIN data. One involves XORing the PIN data with the last 12 digits of the card number and then encrypting using 3DES in ECB mode. Another involves stringing the PIN together with a transaction number that is then encrypted using 3DES in ECB mode. The final format involves padding the PIN with random bytes and then encrypting. All three methods, Green said, prevent two users with the same PIN from having their data encrypt to the same value under the same key.
“ECB mode has many flaws, but one nice feature is that the encryption of two different values (even under the same key) should lead to effectively unrelated ciphertexts,” Green wrote. “This means that even an attacker who learns the user’s PAN shouldn’t be able to decompose the encrypted PIN without knowledge of the key.”
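As a rough sketch of the first format Green describes, which matches the ISO 9564 "Format 0" PIN block, the Python below shows how a four-digit PIN is padded and XORed with the last 12 digits of the card number (PAN) before any encryption takes place. The 3DES step itself is omitted here; the point is only to show why two cards with the same PIN feed different plaintext blocks to the cipher.

```python
def pin_block(pin: str, pan: str) -> bytes:
    """Build an ISO 9564 Format 0 PIN block (the value 3DES would encrypt)."""
    # PIN field: control nibble 0, PIN length, the PIN, then 0xF padding
    # out to 16 hex digits (8 bytes).
    pin_field = f"0{len(pin)}{pin}".ljust(16, "F")
    # PAN field: four zero nibbles, then the rightmost 12 PAN digits
    # excluding the final check digit.
    pan_field = "0000" + pan[-13:-1]
    # XOR the two 8-byte fields together.
    return bytes(a ^ b for a, b in
                 zip(bytes.fromhex(pin_field), bytes.fromhex(pan_field)))

# Two cards with the same PIN produce different blocks, so even under
# the same key the ECB ciphertexts are unrelated:
# pin_block("1234", "4000001234567899") != pin_block("1234", "4111111111111111")
```

This is why, as Green notes, learning a cardholder's PAN alone does not let an attacker decompose the encrypted PIN without the key.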
All that said, the derision levied against 3DES was intense for days after Target’s announcement. Green noted that two-key 3DES will be banned for FIPS-certified products after next year because its 112-bit key is considered too short; three-key 3DES uses a 168-bit key and remains FIPS approved.
Green added that some impractical attacks are possible against 3DES, largely because its block size is 64 bits long, something that 128-bit AES eliminates.
“There are some impractical attacks on 3DES that dramatically reduce its key strength,” Green said. “However these are way too expensive to use in practice, and they only reduce the key strength to a level that’s still pretty large (168 down to 112 bit).”
The Syrian Electronic Army took advantage of the relative calm of New Year’s Day to make a loud statement about the NSA’s surveillance program and Microsoft’s alleged participation in it. The group compromised the Twitter account and blog of Microsoft’s Skype service and posted anti-surveillance messages on both, which were later removed.
On Wednesday afternoon, the official Skype account on Twitter posted a message that accused Microsoft of monitoring users’ Hotmail and Outlook.com email traffic and selling it to the government.
“Don’t use Microsoft emails (hotmail,outlook), They are monitoring your accounts and selling the data to the governments,” the message said. It was later removed from the Skype Twitter feed. Shortly thereafter, Microsoft officials regained control of the account and apologized to users for the attack.
“You may have noticed our social media properties were targeted today. No user info was compromised. We’re sorry for the inconvenience,” the message said.
Microsoft, along with other tech players such as Google, Apple and Yahoo, has been implicated in some of the leaks from former NSA contractor Edward Snowden regarding the agency’s surveillance capabilities. Some of the leaks have suggested that those companies provided the NSA with direct access to their networks or services, an allegation that all of the companies have denied. Officials from those companies have said they only provide information when required by law or a court order.
The SEA attackers also posted a similar message on the official Skype blog. The SEA has claimed responsibility for a long list of attacks over the last couple of years, including compromises of the New York Times and the Washington Post. The group has specialized in compromising the social media accounts of a variety of old-school media organizations and often espouses messages in support of the embattled Syrian government.