More than 300,000 small office and home office routers, most in Europe and Asia, were compromised in a campaign that started in mid-December, continuing a rash of security incidents involving home and small business networking equipment.
Researchers at Team Cymru published a report today on the pharming attacks. The attackers are overwriting DNS settings on the devices and redirecting DNS requests to attacker-controlled sites via extensive man-in-the-middle attacks.
Routers from a number of manufacturers, including TP-Link, D-Link, Micronet and Tenda, are involved, and victims are concentrated in Vietnam, India, Italy and Thailand. Team Cymru said it notified the affected vendors, none of which responded to its outreach. The researchers said they also notified law enforcement.
The researchers identified the IP addresses involved: 5[.]45[.]75[.]11 and 5[.]45[.]75[.]36. Since the routers’ primary DNS IP addresses are overwritten in the attacks, the victims are susceptible to denial of service if the attackers’ servers are taken down, Team Cymru said.
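As a minimal illustration of how a defender might act on the indicators above, the sketch below compares a router's configured DNS servers against the two attacker-controlled resolvers named in the report. The function and variable names are my own, not from Team Cymru's report.

```python
# Toy triage check based on the rogue resolver IPs published by Team Cymru.
# A defender could compare a router's configured DNS servers against this
# small blocklist; this is a minimal sketch, not a complete detection tool.

ROGUE_RESOLVERS = {"5.45.75.11", "5.45.75.36"}

def flag_rogue_dns(configured_servers):
    """Return the subset of configured DNS servers that match the
    attacker-controlled resolvers named in the report."""
    return sorted(set(configured_servers) & ROGUE_RESOLVERS)

# Example: a router whose primary DNS entry was overwritten by the campaign.
hits = flag_rogue_dns(["5.45.75.11", "8.8.8.8"])
print(hits)  # ['5.45.75.11']
```

In practice the configured servers would be read from the router's admin interface or DHCP lease data; any hit warrants resetting the device's DNS settings and updating its firmware.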
The attacks were detected in January on several TP-Link routers redirecting victims to the two IP addresses; the TP-Link routers were not accessible via default passwords, Team Cymru said. Instead, the hackers exploited a cross-site request forgery vulnerability on the devices, as well as a version of the ZyXEL ZynOS firmware vulnerable to an attack in which a hacker could download a saved configuration file, including admin credentials, from a URL in the web interface that did not require authentication.
Team Cymru said it observed more than 300,000 unique IP addresses sending DNS requests to the attack servers, which were acting as open resolvers, thus responding to any external request.
The researchers said in the report that the campaigns are similar to recent attacks against a number of banks in Poland, but are likely being conducted by separate hacker groups. Poland’s mBank was targeted by similar DNS redirection attacks, which attackers used to steal credentials for online accounts. In those attacks, SMS messages were sent to victims, enticing them to approve transfers to the attackers’ accounts. The IP addresses involved in the mBank attacks were 95[.]211[.]241[.]94 and 95[.]211[.]205[.]5. Unlike the latest router attacks, which involved hundreds of thousands of devices, Team Cymru observed only 80 or so IP addresses in that campaign.
“The scale of this attack suggests a more traditional criminal intent, such as search result redirection, replacing advertisements, or installing drive-by downloads; all activities that need to be done on a large scale for profitability. The more manually-intensive bank account transfers seen in Poland would be difficult to conduct against such a large and geographically-disparate victim group,” the report said.
Attackers have been targeting home networking gear with relative success for some time now. The most recent incident was the so-called Moon worm identified by the SANS Institute. Moon spread among Linksys home and SMB routers, exploiting a CGI script vulnerability and spreading via the HNAP protocol used in Cisco devices. It was unclear at the time whether there was a malicious payload, or what kind of command-and-control communication was happening.
“There are about 670 different IP ranges that it scans for other routers. They appear to all belong to different cable modem and DSL ISPs. They are distributed somewhat worldwide,” said SANS CTO Johannes Ullrich. “We are still working on analysis of what exactly it does. But so far, it looks like all it does is spread (which is why we call it a worm). It may have a ‘call-home’ feature that will report back when it infected new hosts.”
Schneider Electric Mitigates Vulnerabilities in OPC Factory Server and Floating License Manager Products
The Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) last week issued advisories warning of serious vulnerabilities in Schneider Electric SCADA gear.
Schneider Electric is a supplier of energy management control products that are used in a number of critical industries in North America, including energy, water and wastewater, food, agriculture, and transportation systems.
The company reported two vulnerabilities that could allow attackers to execute malicious code. Product upgrades have been developed by Schneider Electric that mitigate the vulnerabilities.
The first affects the Schneider Electric OPC Factory Server, which provides an interface for client applications that require access to production data in real time. Versions TLXCDSUOFS33 – V3.35, TLXCDSTOFS33 – V3.35, TLXCDLUOFS33 – V3.35, TLXCDLTOFS33 – V3.35, and TLXCDLFOFS33 – V3.35 contain a buffer overflow vulnerability triggered when a malformed configuration file is parsed, the ICS-CERT advisory said.
“When a malformed configuration file is parsed by the demonstration client, it may cause a buffer overflow allowing the configuration file to start malicious programs or execute code on the PC,” the advisory said.
The vulnerability is not remotely exploitable, keeping its severity score down.
“The exploit is only triggered when the demonstration client opens a specially modified sample client configuration file to execute malicious programs or execute code on the PC,” the advisory said.
The second vulnerability, an unquoted service path vulnerability, was found in the Schneider Electric Floating License Manager; it too cannot be exploited remotely, the ICS-CERT advisory said. Versions V1.0.0 through V1.4.0 are affected; the license manager is used in five products from the company: Power Monitoring Expert; StruxureWare Process Expert; StruxureWare Process Expert Libraries; Vijeo Citect (SCADA); and Vijeo Citect Historian.
“This vulnerability could allow attackers to start malicious programs as Windows services,” the advisory said. “When the executable path of a service contains blanks, attackers can exploit this to execute malicious programs.”
The advisory said the exploit is triggered only when a local user runs the vulnerable application and the path contains blanks; to mitigate, service paths in the registry must be surrounded with quotes, ICS-CERT said.
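The mechanics behind the advisory's fix can be sketched in a few lines: when a Windows service path contains spaces and no quotes, each space is an ambiguity point, and Windows may try a truncated executable path before the intended one. The sketch below enumerates those truncation points; the function name and example paths are illustrative, not taken from the advisory or Schneider Electric's products.

```python
def truncation_points(image_path):
    """For an unquoted Windows service path containing spaces, return the
    truncated paths Windows may try before the full path. An attacker who
    can write an executable at one of these locations (e.g. C:\\Program.exe)
    can get code run with the service's privileges."""
    if image_path.startswith('"'):
        return []  # a quoted path is parsed unambiguously
    parts = image_path.split(" ")
    # Truncate at each space; the final, full path is the intended target.
    return [" ".join(parts[:i]) for i in range(1, len(parts))]

# Hypothetical unquoted service path with two spaces:
for candidate in truncation_points(r"C:\Program Files\Vendor App\service.exe"):
    print(candidate)
# C:\Program
# C:\Program Files\Vendor
```

This is why surrounding the registry service path with quotes, as ICS-CERT recommends, removes the ambiguity entirely.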
Schneider Electric products using the vulnerable license manager are automatically updated via the company’s update system.
Apple rarely offers anyone a glimpse inside its walled-off security garden. The last time it did was in the spring of 2012, when it released a detailed paper on the security of its iOS operating system for iPhones and iPads. The company also gave a much-anticipated, if anticlimactic, presentation at the Black Hat Briefings that summer summarizing the high points of the paper.
Now, scant days after the revelation that a stunningly simple coding mistake led to emergency updates for iOS and OS X in order to close a cavernous SSL certificate-validation vulnerability, Apple has released an updated iOS security guide.
The 33-page document explains in a helpful degree of detail the inner workings of iOS system security, encryption and data protection capabilities, network security, device controls, application security and security around its Internet services such as iCloud, iCloud Keychain, iMessage and more.
The application security and Internet services sections are new additions to the paper. The services, in particular, are key hubs for iOS users’ data, not only as it’s shared via messaging applications and services, but at rest in storage services such as iCloud.
Rich Mogull, founder of consultancy Securosis, wrote in an analysis of the paper that this is the first time Apple has shared any level of noteworthy detail on iCloud and said the mechanisms deployed by Apple would make it difficult for intelligence agencies to intercept data such as iCloud Keychain passwords. Keychain allows users to sync passwords between devices running iOS and computers running OS X.
Apple said in the paper that iCloud Keychain and Keychain Recovery services are designed so that passwords are protected regardless if an account has been compromised, whether iCloud is compromised, or third parties have access to user accounts.
The paper describes an elaborate encryption scheme used by iCloud Keychain involving asymmetric cryptography in which the private key signs the public key, yet never leaves the user’s device.
“Each keychain item is sent only to each device that needs it, the item is encrypted so only that device can read it, and only one item at a time passes through iCloud,” Mogull wrote. “To read it, an attacker would need to compromise both the key of the receiving device and your iCloud password. Or re-architect the entire process without the user knowing. Even a malicious Apple employee would need to compromise the fundamental architecture of iCloud in multiple locations to access your keychain items surreptitiously.”
Apple also uses a secure infrastructure for keychain escrow, allowing only authorized users and devices to recover a keychain. The escrow records are protected by clusters of hardware security modules, Apple said. Again, another complex series of events protects the escrow record and, ultimately, keychain recovery.
The timing of the paper keeps the security of Apple devices in the news a bit longer. Apple’s release of iOS 7.0.6 late on the afternoon of Friday, Feb. 21, corrected a coding mistake that removed SSL certificate checks from iOS—and it was later revealed to affect OS X as well. Attackers who had successfully pulled off a man-in-the-middle attack on a victim’s wireless network could intercept and read communication in clear text because of the “goto fail” bug, as it has come to be known.
In an analysis of the vulnerability, Google expert Adam Langley said a server could send a valid certificate chain to the client and not have to sign the handshake. Langley summarized:
“This signature verification is checking the signature in a ServerKeyExchange message. This is used in DHE and ECDHE ciphersuites to communicate the ephemeral key for the connection. The server is saying ‘here’s the ephemeral key and here’s a signature, from my certificate, so you know that it’s from me’,” Langley wrote in his analysis. “Now, if the link between the ephemeral key and the certificate chain is broken, then everything falls apart. It’s possible to send a correct certificate chain to the client, but sign the handshake with the wrong private key, or not sign it at all! There’s no proof that the server possesses the private key matching the public key in its certificate.”
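Langley's point can be modeled with a toy verifier: the duplicated goto caused the client to skip the signature check on the ServerKeyExchange, so any valid certificate chain was accepted even when the handshake signature did not verify. The code below is a deliberately simplified model of that control-flow gap, not Apple's actual implementation.

```python
# Toy model of the "goto fail" validation gap. In the flawed code, an
# unconditional jump made the signature check unreachable, so a success
# status fell through whenever the certificate chain itself validated.

def broken_accepts(chain_valid, sig_valid):
    """Models the buggy client: the handshake signature is never checked."""
    if not chain_valid:
        return False
    return True  # signature check skipped -- the essence of the bug

def fixed_accepts(chain_valid, sig_valid):
    """Models the patched client: both checks must pass."""
    return chain_valid and sig_valid

# A man-in-the-middle replaying a legitimate (valid) certificate chain,
# but without the matching private key to sign the handshake:
print(broken_accepts(True, False))  # True  -> connection wrongly accepted
print(fixed_accepts(True, False))   # False -> connection rejected
```

The model captures why the bug was so dangerous: the attacker never needed a fraudulent certificate, only the inability of the client to tie the ephemeral key back to the certificate's private key.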
Oracle’s Demantra, part of the company’s Value Chain Planning suite of software, is fraught with vulnerabilities, according to several bug disclosures issued over the weekend.
Researchers at the London-based computer security firm Portcullis claim the application is plagued by four vulnerabilities that could allow an attacker to extract sensitive information, carry out phishing attacks, and modify content within the application, among other attacks.
The first problem, a local file disclosure vulnerability (CVE-2013-5877) in the app, could let an attacker harvest useful information from the web.xml configuration file or “download the whole web application source code,” according to a warning published Saturday.
A SQL injection vulnerability (CVE-2014-0372) in Demantra could allow an attacker to extract authentication credentials and personal details from the app, along with the ability to modify content. From there, if an attacker added malicious code, they could deliver malware or target other exploits in client browsers. The security firm claims modifying content might be a bit more difficult because the attacker would have to execute a “blind” SQL injection attack and request many pages to get it to work, but still says it “does not prevent exploitation.”
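Why a blind SQL injection requires "many pages" can be illustrated with a toy oracle: the attacker observes only a yes/no difference per request, so each request leaks roughly one comparison's worth of information, and extracting even a short value takes dozens of round trips. The mock below stands in for a vulnerable page; no real SQL, and nothing Demantra-specific, is involved.

```python
import string

# Toy blind-extraction model. page_says_true() plays the role of a page
# whose content differs when an injected boolean condition holds; the
# attacker recovers the secret one character at a time.

SECRET = "s3cret"  # stands in for data the injection would extract

def page_says_true(guess_prefix):
    """Mock oracle: does the secret start with this prefix?"""
    return SECRET.startswith(guess_prefix)

def extract_secret(alphabet=string.ascii_lowercase + string.digits):
    recovered, requests = "", 0
    while True:
        for ch in alphabet:
            requests += 1  # one simulated page request per guess
            if page_says_true(recovered + ch):
                recovered += ch
                break
        else:
            # No character extended the prefix: extraction is complete.
            return recovered, requests

secret, n = extract_secret()
print(secret, n)  # recovers "s3cret" after dozens of simulated requests
```

Real blind-injection tools reduce the request count with binary search over character codes, but the core constraint is the same: one bit-sized answer per request.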
A cross site scripting vulnerability (CVE-2014-0379) in the app’s TaskSender could let an attacker execute script code in an authenticated user’s browser, which could lead to session hijacking.
This might be the most troublesome of all the bugs because it can open up a whole can of worms on top of the session hijacking.
With those credentials an attacker could then access the site as that user and perform actions as them, such as viewing and changing personal data and making transactions. The vulnerability can also be leveraged in a phishing attack in which an attacker can create a fake log-in page and get a genuine user to log in without knowing the site had been compromised.
Portcullis notes that in a worst-case scenario the attacker could even gain full control of a user’s computer if they used the XSS vulnerability to exploit any further vulnerabilities in browsers.
The last big vulnerability, a problem with the app’s backend, is something the firm calls a Database Credentials Disclosure vulnerability (CVE-2013-5795); it can let anyone retrieve the database instance name and corresponding credentials. That means an attacker could combine this issue with some of the others to steal database credentials.
Oliver Gruskovnjak, the chief technical officer at Portcullis, detailed all the vulnerabilities on Saturday on the company’s site and via seclists.org’s Full Disclosure mailing list.
All issues are present in version 12.2.1 of Demantra, an analytical engine that Oracle produces that allows its users to keep track of demand management, trade planning and sales/operation planning.
Oracle just patched Demantra in January as part of its quarterly Critical Patch Update (CPU), fixing six bugs in the app, four of which were remotely exploitable without authentication. While Oracle didn’t immediately respond to a request on Monday, it’s probably safe to say the company is busy working on patching these issues for its next CPU, scheduled for release on April 15.
CloudFlare claims government requests for user data are affecting fewer than 0.017 percent of its two million global customers.
The Web performance and security company yesterday issued the report in accordance with the Department of Justice’s new regulations for publishing information pertaining to law enforcement requests for user data. While the figure is necessarily a bit off – given that current law bars companies from including specific figures regarding domains affected by National Security Letters (NSLs) – the report suggests that the government has sought information on perhaps as many as 3400 of CloudFlare’s clients.
The data reflect all requests as of December 31, 2013.
The company says it received 18 subpoenas last year, complying with only one of those requests. Another request is still in process. The requests pertained to 17 separate domains but affected only one customer account.
CloudFlare says it pushed back on 16 subpoenas, all of which were rescinded. In some instances, the company claims court orders were issued in lieu of the original subpoena. In other cases, CloudFlare was simply not able to provide any information.
The company says it received 28 court orders, complying with 25 such orders. Two of the government requests remain in process. In total, court orders affected 227 domains under 38 customer accounts. For one of these court orders, CloudFlare was incapable of providing any information.
The company says it received three search warrant requests, one of which was eventually rescinded. It ended up complying with only one of the orders, though a second remains in process. The warrants affected four domains under one user account.
“In the rare instances where law enforcement has sought content such as abuse complaints or support communications, CloudFlare has insisted on a warrant for those electronic communications,” the company says. “To date, we have received no such warrants.”
The company received and complied with just one request for a pen register/trap and trace order that affected only one domain under one customer account.
In both 2012 and 2013, CloudFlare claims it received between 0 and 249 NSLs.
“Even assuming the high end of the range at 249 accounts affected,” the company wrote in its transparency report, “such national security orders would affect fewer than 0.02% of CloudFlare customer accounts.”
Under the new Justice Department rules, companies are allowed to report the receipt of NSLs in batches of 250, starting with 0-249. In other words, no company is permitted to say it received zero NSLs.
The company notes that the new rules are an improvement on the old ones, but “still consider[s] these new regulations to be an undue prior restraint on the freedom of speech.”
CloudFlare is also clear that it has never turned over its SSL keys or its customers’ SSL keys to anyone; it’s never installed any law enforcement software or equipment anywhere on its network; it’s never terminated a customer or taken down content due to political pressure; nor has it ever provided any law enforcement organization a feed of its customers’ content transiting its network.
“If CloudFlare were asked to do any of the above,” the company claims, “we would exhaust all legal remedies in order to protect our customers from what we believe are illegal or unconstitutional requests.”
CloudFlare’s report follows similar ones by AT&T, which received more than 2,000 NSLs, as well as Twitter and various other tech giants, all of which seem to indicate that government requests for user data are on the rise.
SAN FRANCISCO–The security of data being transmitted over the Web relies on a large number of moving parts, from the integrity of the machine sending the data, to the security of the browser, to the implementation of encryption, to the fragility of the certificate authority system. Experts have been spending the best part of the last decade trying to address many of these issues, but there are still a number of hard problems to solve.
One of the most difficult of these is the way that users and browsers interact with the CA system and how the CAs handle certificate issuance and attempts to tamper with the system. In the last few years, a number of methods for addressing these issues have been proposed, and some show real promise, including the notion of certificate transparency. The work of engineers at Google, the system is designed to provide a public log of every certificate that’s issued, with the user’s browser receiving a proof with each certificate. The logs themselves are append-only and cryptographically assured.
“When implemented, Certificate Transparency helps guard against several types of certificate-based threats, including misissued certificates, maliciously acquired certificates, and rogue CAs. These threats can increase financial liabilities for domain owners, tarnish the reputation of legitimate CAs, and expose Internet users to a wide range of attacks such as website spoofing, server impersonation, and man-in-the-middle attacks,” Google’s description of the framework says.
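The "append-only and cryptographically assured" property can be sketched with a simple hash chain: each new entry commits to the entire history before it, so a log can be extended but never silently rewritten. The real Certificate Transparency design (RFC 6962) uses Merkle trees, which additionally allow compact inclusion and consistency proofs; the class below is a simplified stand-in with invented names.

```python
import hashlib

# Minimal hash-chained log: each head commits to all prior entries, so
# altering or removing an old certificate changes every later head and
# is detectable by anyone holding an earlier checkpoint. This is a toy;
# Certificate Transparency proper uses Merkle trees per RFC 6962.

class AppendOnlyLog:
    def __init__(self):
        self.entries = []
        self.heads = [hashlib.sha256(b"").hexdigest()]  # head after 0 entries

    def append(self, cert_der: bytes) -> str:
        prev = self.heads[-1]
        head = hashlib.sha256(prev.encode() + cert_der).hexdigest()
        self.entries.append(cert_der)
        self.heads.append(head)
        return head  # a monitor records this as a checkpoint

    def consistent_with(self, old_size: int, old_head: str) -> bool:
        """Check a previously recorded checkpoint still lies on the chain,
        i.e. the log was only ever extended, never rewritten."""
        return old_size < len(self.heads) and self.heads[old_size] == old_head

log = AppendOnlyLog()
h1 = log.append(b"cert-for-example.com")
log.append(b"cert-for-example.org")
print(log.consistent_with(1, h1))  # True: history was only extended
```

A monitor that periodically records the head can thus detect any attempt by a log operator to retroactively insert or scrub a certificate.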
The method requires the CAs to cooperate and submit their certificates to these public logs, and that’s one of the things that’s holding up its broad adoption.
“We need to get the CAs to change their behavior so they emit certificates this way,” Chris Palmer, a security engineer on the Chrome team at Google, said in a talk at TrustyCon here Thursday.
Palmer said that among the current proposals to help fix the trust problem online, he believes certificate transparency has the most potential for success. Some CAs have already committed to the idea, including DigiCert and GlobalSign, one of the handful of larger certificate authorities in the world. Google is running a certificate log server, and information on the certificate transparency status of a given site using SSL is available in Chrome. Google also has set up a forum that lists certificate pushes and other issues surrounding the system.
But in order for the proposal to gain more steam, more CAs need to participate and agree to publish their certificates in this way. When that happens, it could have a significant effect on finding and revoking malicious or mistakenly issued certificates.
SAN FRANCISCO – As more Web-based services are encrypted, privacy advocates are concerned the next wave of aggressive surveillance activity could target automated update services that essentially provide Internet companies root access to machines.
Chris Soghoian, principal technologist with the American Civil Liberties Union, said today at TrustyCon that current malware delivery mechanisms such as phishing schemes and watering hole attacks could soon be insufficient for intelligence agencies and law enforcement such as the NSA and FBI.
“The FBI is in the hacking business. The FBI is in the malware business,” Soghoian said. “The FBI may need more than these two tools to deliver malware. They may need something else and this is where my concern is. This is where we are going and why I’m so worried about trust.”
Update services such as Microsoft’s Windows Update have already been exploited in nation-state attacks such as Flame. Flame was a terribly complex attack that used a cryptographic collision attack to forge a Microsoft digital certificate, spoofing the update service so that infected computers would accept malicious updates from the phony service.
Soghoian said his concern is that the government will not only exploit the convenience of these update services offered by most large providers, but also that it will erode the trust users have in the services leaving them vulnerable to cybercrime, identity theft and fraud.
“There are really sound security reasons why we want automatic security updates. If consumers have to do work to get updates, they won’t, and they will stay vulnerable,” Soghoian said. “What that means though is giving companies root on our computers—and we really don’t know what’s in the code after the fact. This is a point of leverage the government can use. We have no evidence they are using it right now, but these companies have a position of power over our devices that is unparalleled.”
Soghoian provided historical context to back up his overall claim that whatever access the government has to the intelligence is never enough. Going back to the early days of wiretapping 100 years ago, Soghoian said the government and law enforcement has always enjoyed a cozy relationship with telephone companies and today with telecommunications providers. Transparency reports published by Google, Facebook, Twitter, Microsoft and other giant Internet companies offer a window into the number of law enforcement requests these companies get for user data, as well as government requests related to matters of national security.
Soghoian also shared testimony going back as far as 2010 from former FBI general counsel Valerie Caproni, now a U.S. District Court judge in New York, who warned Congress repeatedly about how changes in technology would lead consumers to use Internet services that would be difficult to monitor.
Soghoian cautioned that the government could take advantage of existing features in technology to get their way, citing as an example a feature in Google Android phone locks where if a user fails on their pattern to unlock their phone, Android will offer the user a prompt for the Google account credentials synched with the device. Soghoian said through Freedom of Information Act requests, it’s been revealed the government has asked for password resets on particular users in order to access their accounts or devices.
More concerning still is the government’s ability to use a court order to add features that do not exist in products currently. Skype, he said, was served with a directive from the Attorney General to modify its end-to-end encryption capabilities in order to give the FBI access to encrypted communication, something that was revealed in the Snowden documents.
“We still don’t know what Skype did and when, and what law was used,” Soghoian said, adding that Edward Snowden’s secure email provider Lavabit was also served with a similar court order for its SSL keys. Rather than remain complicit, Lavabit closed its doors. If update services are the next surveillance frontier, Soghoian hopes the respective companies remain vigilant, because the APIs used to deliver updates can be used to deliver code to specific people.
“I would hope Google would fight that type of order all the way to the Supreme Court. The same goes for Apple and Microsoft and others,” he said. “I hope the companies we depend on and trust would fight.”
SAN FRANCISCO–The Lavabit case, which saw the secure email provider’s owner shut the company down after being forced to hand over to the government the encryption key that protected his users’ data, may seem like an extreme reaction to a unique situation. But experts say it’s likely that there will be similar situations in the near future, and technology providers and users should change the way they think about what the threats to their data may be.
The FBI went to Lavabit’s founder, Ladar Levison, last year in the wake of the NSA revelations and demanded access to the encrypted emails of one of its users, Edward Snowden. After a lot of back and forth and legal wrangling, Levison eventually turned over the encryption key that protected the communications of all of his users, and then promptly closed the business. Marcia Hoffman, one of Levison’s lawyers, said that she believes there will soon be other cases like Lavabit.
“I don’t believe that Lavabit is a unicorn,” she said in a talk at the TrustyCon conference here Thursday. “We need to update our threat models. Ladar was worried about data at rest, not data in transmission. The threats are different than we thought. Security and privacy enhancing services are really in the crosshairs. To the extent that you design a service like Lavabit, you should be thinking about how you’re going to deal with government requests.”
Those threats now include not just attackers and cybercriminals, but governments and their lawyers. Hoffman said that the way the government is interpreting surveillance and wiretapping laws has put technology companies in a difficult position. CALEA, the statute that requires telecom companies and others to help law enforcement agencies with lawful intercept and wiretapping operations, specifically didn’t apply to information technology companies, she said.
“The government has taken the position that a service provider has to provide any information that the government wants,” she said. “If you don’t like turning over your keys, you can just backdoor your system. Putting this kind of pressure on Internet companies really flies in the face of what Congress decided.”
The Lavabit case is still wending its way through the court system, as Levison is appealing a contempt of court order against him. Hoffman said that the broader issues related to the case–the use of encryption and the government’s efforts to get at encrypted data–will only become more important in the months and years ahead. And she also warned users not to become too enamored of new, supposedly surveillance-resistant communications services that are springing up.
“If you don’t have a reasonable expectation of privacy in encrypted data, where do you have that expectation?” she said. “We need to stop making promises to users that we don’t know if we can keep, like NSA-proof email. I would be very skeptical of claims like that. I don’t know if anybody can actually make a promise like that.”
Dennis Fisher and Mike Mimoso run down the news from day two of the RSA Conference, including the new FBI director’s speech, and preview TrustyCon.
http://threatpost.com/files/2014/02/digital_underground_147.mp3
The official mobile application for the ongoing RSA Conference contains a half-dozen security vulnerabilities, according to an analysis performed by researchers from the security service provider IOActive.
IOActive chief technical officer Gunter Ollmann claims the most severe of the vulnerabilities could give an attacker the ability to perform man-in-the-middle attacks, injecting malicious code and stealing login credentials.
“If we were dealing with a banking application,” Ollmann writes, “then heads would have been rolling in an engineering department, but this particular app has only been downloaded a few thousand times, and I seriously doubt that some evil hacker is going to take the time out of their day to target this one application (out of tens-of-millions) to try phish credentials to a conference.”
While Ollmann notes that the man-in-the-middle vulnerability mentioned above is the most severe, he says the second most severe bug is actually more interesting. The application apparently downloads a SQLite database file that is then used to populate the app’s user interface with various conference information, like speaker profiles and schedules. Seems innocuous enough, but that database – for reasons that remain a mystery to Ollmann – contains the first and last names, employers, and titles of every user that has downloaded and registered with the application.
Ollmann admits he’s taking a bit of a potshot at one of the premier security industry conferences, but the point he is really trying to make, he claims, is a bigger one.
“Security flaws in mobile applications (particularly these rapidly developed and targeted apps) are endemic, and I think the RSA example helps prove the point that there are often inherent risks in even the most benign applications,” he said.
SAN FRANCISCO – Outgoing FBI Director Robert Mueller predicted to his successor James B. Comey that cybersecurity would dominate his 10-year tenure much the same way terrorism did Mueller’s.
“After five months, he’s right,” Comey said today during his keynote address at RSA Conference 2014.
Comey’s first appearance at RSA was a breezy 30-minute monologue disguised as a familiar plea for enhanced information sharing between the public and private sectors, in addition to a rundown of the nation-state and criminal threats facing the U.S., and an announcement that the FBI will release an unclassified version of its Binary Analysis Characterization and Storage System (BACSS) malware repository later this year.
“Send your malware sample to us, and you get a report back in a matter of hours on how it works, what it’s targeting and where it’s been seen elsewhere,” Comey said. “We hope to get BACSS on the same level as our repositories for fingerprints, criminal records and DNA.”
That seems to be an easier goal for Comey to attain than the information sharing vision he outlined. The NSA’s surveillance activities have made corporations gun shy of the government, and talk of machine-to-machine communication in real time about threats and vulnerabilities is sure to make some uneasy.
“I imagine a day when intelligence sources, the government, antivirus companies, financial and communications companies share machine data instantaneously,” Comey said. “To do all that, we need an automated intrusion detection system, in a standard language and native format, to all communicate in real time. And we must do this and be mindful of the need to protect privacy.”
Government agencies already are mandated to deploy the Einstein IDS on all network gateways in order to monitor traffic for attacks. The NSA has a proposal out for an enhancement to Einstein that would allow for monitoring of government traffic on private sector computers.
“We need help,” Comey said. “You are victims, and the key to defeating cybercrime as well. The information is on your servers; you have the expertise and knowledge to help us and we are actively trying to listen.”
Information sharing has never managed to clear significant hurdles. Private companies are in no hurry to give the government access to networks to investigate attacks or collect forensic information. Attack data must also be sanitized so as not to expose companies to additional attacks or hurt their competitive standing. Comey, who spent the latter half of his career working as general counsel at Lockheed Martin and Bridgewater Associates, tried to assure the audience he understood their hesitation.
“We don’t do a good job clarifying what we need to do,” Comey said. “There’s no unifying threat reporting system. Who in government is responsible for what in terms of cybercrime? I get that. I know where you’re coming from.
“Information always seemed to flow in one direction toward government,” Comey said. “No doubt government has information it cannot always share for reasons that I’m sure make sense to you. We will share much as we can, as quickly as we can, and in the most usable format.”
SAN FRANCISCO – Privacy has been in a stranglehold for a long time. Some believe it’s a fleeting concept done irreparable harm by the Snowden revelations. Others believe it’s merely in a transition until the norms of Internet behavior are sorted out.
The privacy chiefs of Google, Microsoft and Intel Security tried to do some table-setting today at RSA Conference 2014, lobbying for even more transparency in reporting government requests for user data and explaining how the notion of privacy has been flipped on its ear.
Microsoft chief privacy officer Brendon Lynch reiterated a point made a day earlier by CSO Scott Charney that Microsoft has never received a request for bulk data, and that it would fight such a request if one ever landed on its desk. Google senior corporate counsel Keith Enright reinforced the point, saying the company has never granted the government direct access to internal Google systems.
“Our leadership has been clear and vocal,” Enright said. “We are leading the charge for increased transparency. It’s important people understand what their governments are doing. Transparency is the only way we have accountability and are able to push for change.”
Recently, the Department of Justice reached a settlement with major Internet companies, establishing new reporting norms around government requests for data in the form of National Security Letters and Foreign Intelligence Surveillance Court orders. Since material in the Snowden documents intimated on several occasions that the National Security Agency (NSA) had some sort of access to Google, Facebook, Yahoo and others, the providers believed their hands were tied by reporting restrictions.
“Some of the reporting was hyperbolic and there were some frustrations at not being able to be as transparent as we could be,” Lynch said. “We are now able to refute reports of unfettered access to data and bulk collection. I’m happy to report that those requests impact only a fraction of 1 percent of customer data.”
The panelists, including Intel Security chief privacy officer Michelle Dennedy, emphasized too that this is a global issue impacting economies worldwide since the NSA proved it had a long reach when it was disclosed the agency was tapping overseas connections between data centers to intercept data.
“It’s not just one government, it’s all governments,” Dennedy said, stressing the plural.
On a less global scale, the experts contemplated how Web-based services and social interaction online necessitate a compromise and a redefining of privacy.
“Privacy is about user expectations. Users engage with a service and on some level recognize data moves,” Google’s Enright said. “This requires us to be responsible, inform users and give them meaningful control. If a service is appropriately designed and users understand what they’re engaging with and the transactions they’re making, I don’t think there’s a net loss of privacy.”
The panelists also lamented the depth and breadth of the privacy statements legal counsel requires them to provide, statements that are largely ignored by users. Users are also forced to make changes to settings in services or browsers that they’re unlikely to seek out or understand. Google, for example, offers its users the option to turn off behavioral targeting within their ad settings. The Do Not Track option in web browsers is also available to users, but standards hang-ups leave it up to websites to decide whether they’ll honor the DNT signal.
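Because the DNT standard was never binding, honoring the signal is simply a server-side choice: the browser sends a `DNT` request header and the site decides what to do with it. A minimal sketch of a server-side check, assuming a dict-like view of request headers (the function name and logic are illustrative):

```python
def should_track(headers) -> bool:
    """Decide whether to apply behavioral targeting for a request.

    headers: a dict-like mapping of request header names to values.
    Under the DNT draft convention, "1" means the user opted out of
    tracking, "0" means consent, and absence means no preference.
    """
    if headers.get("DNT") == "1":
        return False  # client opted out; skip behavioral targeting
    return True       # no signal, or explicit consent

print(should_track({"DNT": "1"}))  # False
print(should_track({}))            # True
```

The simplicity of the check is the point the panelists were making: nothing technical prevents honoring DNT; the hang-up is that each site chooses for itself.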
“Privacy is not synonymous with secrecy,” Dennedy said. “It’s the processing of personally identifiable information. You’re not giving up privacy when you choose to communicate with another person. We want our customers to stick with us. We want transparency to be a differentiator; that’s a good thing.”
It’s only been a few days since Apple fixed the nasty certificate-validation “goto fail” vulnerability in iOS and OS X, and now word comes that another bug, one that could allow an attacker to monitor keystrokes on iOS 7 devices with the user none the wiser, also exists.
The problem apparently exists in iOS 6.1.x and the following iOS 7 versions: 7.0.4, 7.0.5, and 7.0.6.
Researchers at FireEye found the problem, which could essentially permit someone to keep track of “all the user touch/press events” happening in the background on one’s iPhone or iPad. The details an attacker could glean include knowing whenever a user touches the screen, presses the home button, increases or decreases the volume or uses Touch ID, the fingerprint identity sensor Apple shipped with its latest iPhone 5S device.
The bug relies on abusing the phone’s “background app refresh” feature, resources the operating system would normally allocate for apps to run in the background. The feature is intended to help users multitask and easily switch between recent apps they’ve used.
The FireEye researchers, Min Zheng, Hui Xue and Tao Wei, were able to exploit the hole by creating a proof-of-concept monitoring app they claim could bypass Apple’s app review process and cause trouble as long as they could get the victim to download the app via phishing. The researchers claim they could also “exploit another remote vulnerability of some app and then monitor in the background.”
While the app can’t tell exactly which keys the victim is pressing, it does provide a list of time-stamped X and Y coordinates that an attacker could later decode to determine which buttons were pressed.
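Decoding those coordinates is a simple geometric lookup: compare each captured point against the known on-screen positions of the keyboard keys. A minimal sketch of that post-processing step, with made-up key rectangles that stand in for the device’s real keyboard geometry:

```python
# Hypothetical decoder for captured touch events. The key rectangles
# below are illustrative values, not Apple's actual keyboard layout;
# a real attacker would measure them per device model and orientation.

# Each entry: key label -> (x_min, y_min, x_max, y_max) in screen points.
KEY_RECTS = {
    "Q": (0, 480, 32, 530),
    "W": (32, 480, 64, 530),
    "E": (64, 480, 96, 530),
    # ...remaining keys would follow the same pattern
}

def decode_touches(events):
    """Map time-stamped touches to key labels.

    events: iterable of (timestamp, x, y) tuples captured in the background.
    Returns a list of (timestamp, key) pairs; unmatched touches are skipped.
    """
    decoded = []
    for ts, x, y in events:
        for key, (x0, y0, x1, y1) in KEY_RECTS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                decoded.append((ts, key))
                break
    return decoded

sample = [(0.10, 70.0, 500.0), (0.35, 33.0, 490.0)]
print(decode_touches(sample))  # [(0.1, 'E'), (0.35, 'W')]
```

This is why the raw event stream is dangerous even though it contains no key labels: the mapping from coordinates to keys is trivial once the screen layout is known.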
Keyloggers or apps that allow background monitoring of iOS devices are nothing new. Multiple apps can run silently in the background on phones, mining users’ activities. The apps can be purchased, some for a lump sum, some for a subscription fee, through third-party merchants. The caveat for those apps is that the device needs to have been jailbroken. In this situation, however, the FireEye researchers claim that non-jailbroken devices are also vulnerable.
According to the blog post the group published Monday, their exploit used a non-jailbroken iPhone 5S running iOS 7.0.4.
The trio claim they’re in the middle of collaborating with Apple on the bug. While a fix is coming, if anyone is truly concerned about being spied on in the meantime, the researchers recommend selectively killing apps running in the background, which users can do by pressing the home button twice and swiping an app up and out of preview to disable unnecessary or suspicious applications.
SAN FRANCISCO–The concept of threat modeling has evolved quite a lot in the last few years, moving from an activity that massive software companies such as Microsoft and Google use to anticipate and defend against potential threats to their products to something that many smaller organizations practice. Starting a threat modeling system can seem daunting, but the good news is that there’s no one right way to do it, just the right way for a given organization.
Microsoft has been using some form of threat modeling internally for many years now and the company’s security group has spent a lot of time speaking publicly about the benefits of the practice and advocating for wider adoption of it. Adam Shostack, a program manager in Microsoft’s Trustworthy Computing group, has been one of the main proponents of threat modeling’s use, and he said that he’s reached the conclusion that threat modeling is not one defined set of methods or principles but a fluid and dynamic way of reducing security risks to products and services.
“I now think of threat modeling like Legos. There are things you can snap together and use what you need,” he said during a talk at the RSA Conference here Wednesday. “There’s no one way to threat model. The right way is the way that finds good threats.”
Security experts often will tell developers that in order to build defensible and resilient products, they need to think like an attacker. That is, look at the product or system the way that a potential adversary would see it, find the weak spots that are ripe for exploitation and correct them. But Shostack said that isn’t exactly the most useful advice.
“Being told to think like an attacker is like being told to think like a professional chef,” said Shostack, who recently published a new book on the topic, Threat Modeling: Designing for Security. “A lot of security people like to cook, but if someone told you to go to the store and buy enough chickens for a restaurant that seats 78 people and turns over three times a night, you’d have no idea what to do.”
As with nearly everything in security these days, there are a number of methodologies, models, checklists and other aids designed to help organizations implement threat modeling. Those tools can be useful and have their places, Shostack said, but none of them should be seen as the perfect answer. Rather, use them as part of the process of putting building blocks in place as you construct a threat modeling program.
“We want to focus on finding good threats. Use your assets and the actions of attackers to make threats real,” he said. “It’s hard to go from a checklist to a broader system. You have to think about threat modeling your software as an end-to-end process.”
Of course, even the best and most well-constructed threat modeling program still has to deal with the most unpredictable and dangerous threat to the product: the end user. Trying to predict how users will misuse, abuse and break a piece of software is a fool’s errand, but Shostack said it’s still up to the professionals to put their products in the best position to survive in today’s environment.
“To tell people that they can’t use their computers for what they want is a battle we’re going to lose over and over again,” he said. “People don’t buy their computers to be secure. They buy them to watch dancing babies.”
SAN FRANCISCO — Two zero-day vulnerabilities in Avaya’s latest one-X 9608 IP telephones have been discovered and are expected to be patched on Friday by the provider.
Researcher Ang Cui, a Ph.D. candidate at Columbia University and chief scientist at Red Balloon Security, will demonstrate an exploit and provide details on the previously unreported vulnerabilities during a presentation, also on Friday at RSA Conference 2014.
Cui has previously discovered zero-days in other network-enabled embedded devices. He said the Avaya bugs are remotely exploitable, the exploits are relatively simple, and potentially millions of phones are at risk (Avaya and Cisco are the IP phone market share leaders).
“It will absolutely compromise the phone remotely,” Cui said. His presentation will include a demonstration of a worm he wrote that remotely exploits the bug and exfiltrates raw audio data by turning the circuit board into a radio transmitter.
“It will do real-time speech detection and transmit a text transcript,” Cui said.
Dr. Salvatore J. Stolfo, a director at Red Balloon and advisor at Columbia University where Cui is a Ph.D. candidate, said the phone will continue to function as intended, but will also be turned into a listening post.
“With the receiver on the hook, the phone will transmit over the network,” Stolfo said. “You can spy on someone in an office if you are able to inject malcode remotely.”
The exploit, Cui said, bypasses security appliances scanning for malicious outgoing network traffic. He said the same attack is applicable to other embedded network devices such as printers and routers.
Cui and Stolfo said an attacker would be able to pivot from other vulnerable embedded devices on the network as well, again eluding detection by IPS and other security technology. Cui’s worm, for example, begins with an exploit of a 2011 printer firmware vulnerability, which replaces the existing code with malicious firmware. An attacker would need to entice the victim to print, for example, an attachment containing the embedded malicious firmware. Once executed, the malware establishes a backdoor and awaits commands; the attacker could then scan for other embedded devices, such as IP phones and routers, listening on the same port.
More than a year ago, Cui demonstrated an attack against a Cisco VoIP phone that also turned it into a listening device. He was able to put code on the phone by installing—and then removing—an external circuit board from the Ethernet port on the phone. Then, using his smartphone, Cui was able to spy through the phone even though its Off-Hook switch was enabled. Cui said he was also able to pull off the same attack remotely, without the need for physical access to the device.
Dennis Fisher and Mike Mimoso discuss the happenings on day one of the RSA Conference, including Art Coviello’s keynote and what makes the NSA mad.

http://threatpost.com/files/2014/02/digital_underground_146.mp3
SAN FRANCISCO–Of the small pool of people who have seen the Snowden documents, few, if any, are as technically savvy and knowledgeable about security and surveillance as Bruce Schneier. And after reading through stacks and stacks of them, Schneier says that yes, the NSA is extremely capable and full of smart people but “they are not made of magic”.
A cryptographer by training and a security thinker by trade, Schneier has spent many hours reading the Snowden documents and thinking about what they mean, both in terms of the NSA’s actual capabilities and their effect on data security and privacy. Much of the news, clearly, is not good on that front. The NSA has a dual mission: to protect the communications infrastructure of the United States and to eavesdrop on the communications of foreign nations. The agency, Schneier said, is very, very good at both of those missions, but it’s the eavesdropping piece that has grown exponentially in recent years as the Internet and mobile devices have become pervasive.
“The NSA has turned the Internet into a giant surveillance platform, one that is robust politically and technologically,” Schneier said during a talk at the RSA Conference here Tuesday. “When you have the budget of the NSA and you have the choice to get the data this way or get it that way, the correct answer is both. Fundamentally the NSA’s mission is to collect everything, and that’s how you have to think about it.”
That collect-everything mentality is enabled by the vast budget, reach and computing power that the NSA has at its disposal. Those advantages allow the agency to not just collect, but store, virtually any amount of data it chooses. But one of the NSA’s other key assets–and perhaps its largest advantage over other intelligence agencies–is its brain power. The agency employs an untold number of top mathematicians and cryptographers and computer scientists, and they all work on solving difficult problems. One of their tasks is overcoming a key obstacle for NSA data collection: encryption.
The NSA is known to be working on an unspecified capability to defeat SSL, and Schneier said that while he hasn’t seen any direct evidence of what that capability might be, there are a number of possibilities.
“My favorite idea right now is elliptic curves. If they know that certain curves are weak they could then try to get algorithms using those curves,” he said.
Other possibilities are some kind of factoring breakthrough, a successful attack on the RC4 cipher, which is known to have some problems already, or a method for exploiting weak random-number generators. But even with all of the resources at its disposal, the NSA currently has a difficult time dealing with encrypted traffic, Schneier said, and that’s something users should use to their advantage.
“The NSA can’t break Tor and it [ticks] them off. Most crypto drives the NSA batty,” he said. “Encryption works and it works at scale. The NSA may have a larger budget than all of the other intelligence agencies combined, but they are not made of magic. Our goal should be to make eavesdropping more expensive. We should have the goal of limiting bulk collection and forcing targeted collection.”
Schneier added that now that many of the NSA’s methods and tools are out in the open, it’s reasonable to expect other agencies, as well as other classes of attackers, to adopt some of them.
“These techniques are spreading. Figure that this is a three to five-year window for cybercriminals to use them,” he said. “Today’s NSA programs are tomorrow’s PhD theses and the next day’s hacker tools. Surveillance is the business model of the Internet.”