Threatpost for B2B
A little-known policy through which the Departments of Justice, Defense, and Homeland Security offered prosecutorial immunity to companies that helped the U.S. military monitor Internet traffic on the private networks of defense contractors has reportedly been expanded by Executive Order to include a score of other “critical infrastructure” industries. The information comes from documents obtained as part of a Freedom of Information Act lawsuit filed by the Electronic Privacy Information Center (EPIC).
EPIC writes that the pilot version of the program came to light in June 2011, after the Washington Post published a report detailing a new National Security Agency program that let the agency monitor traffic flowing from some defense contractors through certain Internet service providers. At the time, the Post quoted Deputy Defense Secretary William J. Lynn III as saying that the program was designed to help thwart attacks against defense firms and that the government hoped to expand it moving forward.
The documents obtained in the FOIA request, EPIC said, reveal that the DoD advised private industry organizations on ways they could circumvent federal wiretap laws in order to aid the DoD and DHS in their surveillance of private Internet networks belonging to defense contractors.
EPIC, digital rights group the Electronic Frontier Foundation, and others are concerned that this program is being expanded to apply to the broad swath of organizations that potentially fall under the increasingly vague category of “critical infrastructure.”
The government has not yet named the program, but EPIC claims that the NSA has partnered with AT&T, Verizon, and CenturyLink in order to keep tabs on the Internet traffic flowing into and out of some 15 defense contractors, including Lockheed Martin, CSC, SAIC, and Northrop Grumman.
For its part, the NSA has said that it is not directly monitoring these networks, but is rather filtering their traffic in order to detect the presence of suspicious packets based on a number of malicious code signatures that the agency has developed.
EPIC issued a FOIA request in July 2011 requesting the following information: “All contracts and communications with Lockheed Martin, CSC, SAIC, Northrop Grumman, or any other defense contractors regarding the new NSA pilot program; All contracts and communications with AT&T, Verizon, and CenturyLink or any other ISPs regarding the new NSA pilot program; All analyses, legal memoranda, and related records regarding the new NSA pilot program; Any memoranda of understanding between NSA and DHS or any other government agencies or corporations regarding the new NSA pilot program; Any Privacy Impact Assessment performed as part of the development of the new NSA pilot program.”
The government failed to provide any of this information, so EPIC filed a FOIA lawsuit on March 1, 2012, and was eventually granted access to thousands of pages of previously unreleased documents, which it has posted on its website.
Photo courtesy of Flickr user TexasGOPVote.com, Creative Commons
Google has released a new Transparency Report, this time pointing out sharp increases in the number of government requests from Brazil and Russia it received to remove content from Google-branded websites.
This is the seventh time the Mountain View-based company has released the report that provides details on how many countries have appealed to the company to remove potentially controversial content over a specific span of time.
In total, Google received 2,285 government requests to remove 24,179 different types of content from July to December 2012, up from 1,811 requests and 18,070 pieces of content from January to June 2012.
Google Legal Director Susan Infantino broke down the numbers in a post on the company’s blog Thursday. Requests from Brazil jumped to 697 in the second half of the year from 191 in the first half, and requests from Russia rose to 114 from just six. Both increases stem from political developments in those countries. Brazil held municipal elections last fall, and half of that country’s requests called for the deletion of potentially defamatory candidate content. In Russia, a new law was implemented that allows government authorities to blacklist and take down websites containing content deemed harmful to children; more than 100 of the requests from Russia pertained to that law.
Google has been releasing the reports every few months – already this year in January and March – in hopes of making it clear for users what governments are doing when it comes to censorship online. Google has made it clear that it’s receiving more and more requests to remove blog posts, especially those that contain politically tinged content, over time.
This version of the report is the first where Google has begun breaking down exactly when it blocked and unblocked certain videos on YouTube in particular countries.
Google received requests from 20 countries to delete a controversial movie from YouTube. Google restricted the clips in Jordan, Malaysia and other nations, and temporarily restricted the video in Egypt and Libya. The film, “Innocence of Muslims,” has fueled a vicious fight over freedom of speech and censorship online since its release last summer. It has also been cited as the motive for a string of denial-of-service attacks against a number of leading U.S. banks.
“While the videos were within our Community Guidelines, we restricted videos from view in several countries in accordance with local law after receiving formal legal complaints,” Infantino wrote.
The report is the third of its kind for Google this year and follows similar reports from Twitter in January and Microsoft in March regarding the disclosure of information requests via law enforcement. The reports are being seen as a welcome trend in the security industry; as Threatpost editor Dennis Fisher put it last month, “it’s time for these disclosures to become as commonplace as quarterly earnings reports.”
If three reports in four months from Google – even if each one breaks down much the same information – are any sign, it’s a promising trend.
Adobe has named Brad Arkin to the newly created position of CSO, a major expansion of responsibilities for Arkin, who has been leading the company’s product security and privacy initiatives.
Adobe has been in the security spotlight for several years now, as attackers have focused their attention on the company’s portfolio of products that enjoy user counts in the billions. Flash and Reader have been frequent targets for attackers who are always on the lookout for vulnerabilities in widely deployed applications, which give them the best chance of compromising a high number of users. Exploits for Adobe products often pop up in commercial exploit kits such as Cool and Blackhole, and Flash and Reader zero-days are highly prized in the hacking underground.
As the threats to Adobe’s products have escalated, so too have the company’s efforts to combat them. Arkin joined the company in 2008, just as Adobe was emerging as a key target. Before that, attackers had focused mainly on Microsoft, Oracle and browsers, but the ubiquity of Adobe’s products drew their attention. Arkin began addressing the problem from the bottom up, implementing a software security program designed to help developers write more secure code and eliminate vulnerabilities before products ship. The company joined the BSIMM program to help measure the effectiveness of the security development lifecycle and also began implementing countermeasures in its products to help prevent exploitation of vulnerabilities.
One of the key changes Arkin’s team made was the implementation of a sandbox for both Flash and Reader. The sandbox helps prevent an attacker from using a bug in a protected application to break out and gain control of the underlying operating system. With Flash running on more than a billion machines, the sandbox gives users of modern versions meaningful protection.
In his new role, Arkin will continue to run the company’s ASSET security research team and the PSIRT product response team, but also will have responsibility for Adobe’s worldwide infrastructure security.
“In my new role, I have the opportunity to lead Engineering Infrastructure Security, a team that builds and maintains security-critical internal services relied on by our product and engineering teams, such as code signing and build environments. I will also continue to manage and foster two-way communication with the broader security community, a vital part of the central security function,” Arkin wrote in a blog post.
“The driving goal behind our security work is to protect our customers from those who would seek to harm them. Adobe has some of the most widely-deployed software in the world and we are keenly aware that this makes us a target.”
It’s not quite the development freeze Microsoft underwent during the Trustworthy Computing push, but it’s a start for Oracle, which will delay the release of Java 8 until Q1 of next year, largely because the platform and browser plug-in is such a security disaster.
This year has done nothing but reinforce that notion. Start where you will, with any number of zero-days, watering hole attacks, or a pair of takedowns at Pwn2Own, Java has taken a beating from hackers in 2013 and apparently enough is enough.
Mark Reinhold, chief architect of the Java Platform Group, took to his personal blog last week to announce that the next version won’t make its scheduled September GA date.
“Maintaining the security of the Java Platform always takes priority over developing new features, and so these efforts have inevitably taken engineers away from working on Java 8,” Reinhold said. “Looking ahead, Oracle is committed to continue fixing security issues at an accelerated pace, to enhance the Java security model, and to introduce new security features. This work will require more engineer hours than we can free up by dropping features from Java 8 or otherwise reducing the scope of the release at this stage.”
In other words, see ya next year Java 8. Not that many people would miss it.
For months, you’ve had experts from a number of security, development and IT organizations tell you flat out: “Disable Java.” And for the average Web user, that’s a feasible strategy. Disabling the plug-in won’t impede the average browsing experience; website functionality won’t be impaired, and you’ve lessened your exposure to exploits targeting the technology. It’s on the business end where disabling Java becomes a sticky proposition. Any number of home-spun applications rely on Java, as do some widely deployed commercial mobile banking, e-government and enterprise services applications. Disabling Java means real costs to those organizations and an impact on the availability of services.
So that puts the onus on Oracle to right its ship in a hurry. Larry Ellison has yet to issue a landmark Gates-esque memo, but maybe he should. Rather than Unbreakable, maybe Ellison should formally put the capital-B Broken label on Java. The industry would surely say “No, duh, Larry,” but it’s a start—admitting you have a problem is generally considered the first step on the road to recovery.
Java is everywhere, making it an attractive target for hackers. Exploits targeting previously unreported vulnerabilities have been folded into a number of popular commercial malware kits. You can also find free attack code on Pastebin and a number of other online sources. It pays to attack Java; just ask the Tibetans, the defense industrial base, mobile developers at Twitter, Apple, Microsoft and Facebook, and any one hosting a website that’s been popped by a Java exploit since Christmas.
It’s a mess.
Not that Oracle hasn’t tried. A slew of security enhancements have been added to Java in recent months around code signing and new prompts warning users that a Java applet could be unsafe. The warnings have shields, are color-coded and there’s bold red text hammering the message home. Neat. Problem is that, much like Microsoft back in the day, by taking this approach Oracle tries to turn the user into a security admin. Users don’t want to be admins. They want their apps. They will click Yes, Run, Save, Execute—whatever it takes to get their apps or funny cat video. And hackers know this. And they’ll trick users into clicking on a harmful applet by spoofing Oracle’s dialog box and security warnings, twisting and turning them in their favor.
Locking down Java 8 is a start. Oracle is putting some key features on hold with this decision and has given itself a yearlong cushion to get its security house in order. For years security experts have been asking Oracle when its Trustworthy Computing moment will come and maybe this is the start. As Reinhold confirmed, security will be a priority going forward.
“If we sacrifice quality in order to maintain the schedule,” he wrote, “then we’ll almost certainly repeat the well-worn mistakes of the past, carving incomplete language changes and API designs into virtual stone where millions of developers will have to work around their flaws for years to come until those features—or the entire platform—are replaced by something new.”
Twitter is facing increased pressure to beef up authentication for users after the hijacking of another high-profile account yesterday caused some temporary tremors on the stock market.
The social network has reportedly been testing two-factor authentication internally; Twitter lags behind Google, Facebook, Microsoft and Apple in implementing a two-factor authentication system. Wired claimed in a report published last night that the micro-blogging giant has developed a two-step login feature. A source told Wired that Twitter plans on incrementally rolling the authentication feature out to its users as soon as internal testing wraps up.
This comes on the heels of a series of false tweets from a hijacked Associated Press Twitter account claiming that President Barack Obama had been injured in a series of explosions near the White House. AP reporter Mike Baker tweeted that the hijacking came less than an hour after some at the AP received an “impressively disguised phishing email.” The false report caused a temporary plunge of 143 points on the Dow Jones Industrial Average.
White House press secretary Jay Carney almost immediately dispelled any concerns by announcing in a press briefing that he had just been with President Obama and that the president was perfectly fine. Once it was clear that the tweet was a fraud, Twitter and the AP quickly suspended this and other AP accounts, and, just as rapidly as it fell, the Dow Jones returned to previous levels.
The Associated Press would later confirm the compromise, saying the Syrian Electronic Army, a pro-Bashar al-Assad regime hacker group, had claimed responsibility for a hack that was preceded by a phishing attack campaign on AP networks. Contrary to what has been widely reported, the AP did not say with any degree of certainty that this account takeover resulted from the earlier phishing campaign.
Two-factor authentication systems require users to authenticate themselves with one mechanism, usually a password, before asking them to authenticate with a second, usually a numeric code sent via SMS to a mobile device. There are variations on how two-factor systems work. Some of the better ones include a physical token or even a biometric identifier as one of the factors. The reality though is that even a rudimentary SMS-based second factor of authentication, like those used by Google and Facebook, would have made it much more difficult for any attacker to hijack AP’s Twitter account (if the AP had the feature turned on).
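To illustrate how minimal such a scheme can be, the sketch below shows a toy server-side store for SMS-style one-time codes, with an expiry window, an attempt limit and a constant-time comparison. Everything here is a hypothetical illustration, not a description of Twitter's, Google's or Facebook's actual systems, which involve delivery infrastructure, rate limiting and account-recovery flows well beyond this.

```python
import hmac
import secrets
import time

CODE_TTL = 300      # seconds a code stays valid (illustrative value)
MAX_ATTEMPTS = 3    # guesses allowed before the code is discarded

class SecondFactor:
    """Toy server-side store for SMS-style one-time login codes."""

    def __init__(self):
        self._pending = {}  # user -> [code, issued_at, attempts]

    def issue(self, user, now=None):
        code = f"{secrets.randbelow(10**6):06d}"  # random 6-digit code
        self._pending[user] = [code, now if now is not None else time.time(), 0]
        return code  # in practice this is delivered over SMS, not returned

    def verify(self, user, submitted, now=None):
        entry = self._pending.get(user)
        if entry is None:
            return False
        code, issued, attempts = entry
        now = now if now is not None else time.time()
        if now - issued > CODE_TTL or attempts >= MAX_ATTEMPTS:
            del self._pending[user]   # expired or brute-forced: discard
            return False
        entry[2] += 1
        if hmac.compare_digest(code, submitted):  # constant-time compare
            del self._pending[user]               # codes are single-use
            return True
        return False
```

The single-use and expiry properties are what make a stolen password alone insufficient: an attacker would also need to intercept the code within its short lifetime.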
The Syrian Electronic Army has carved itself a niche with its Twitter takeovers. The pro-Syria group claimed responsibility for attacks in which it wrested control of National Public Radio accounts last week and a British Broadcasting Corp. account last month, according to a New York Times report.
To its credit though, the hacker collective hasn’t limited itself to hijacking Twitter accounts and publishing alarming but ultimately untrue tweets. In September 2011, the SEA allegedly hacked into and defaced a Harvard University site in an apparent, but unclear, attempt to promote the embattled Assad regime. The hacktivist group has reportedly taken credit for similar attacks targeting the Twitter accounts of Al-Jazeera English, Reuters, and CBS and may have also targeted the Qatar Foundation, FIFA, Human Rights Watch, and Columbia University.
Twitter account takeovers happen all the time, but usually involve low-skilled hackers guessing bad passwords or using automated tools to break weak ones – as opposed to the sort of sustained phishing campaign that numerous sources have suggested enabled the AP hijack. It is probably safe to say that a Twitter account takeover has never caused the amount of grief that yesterday’s did. Fox News suffered a similar breach last summer when hackers took over its politics-focused Twitter account and announced that the President had been assassinated while campaigning in Iowa. The Fox News incident grabbed headlines, but its impact paled in comparison to the almost identical mishap that plagued the more prestigious AP yesterday.
“This latest attack shows just how devastating the impact of hacktivist groups can be as the fake news which was spread from AP’s compromised Twitter account was enough to cause panic on Wall Street for a few moments, making the Dow Jones index plummet by more than 150 points,” said a Kaspersky Lab spokesperson.
A pair of popular WordPress plugins used to help sites cache content have fixed serious vulnerabilities that attackers could exploit simply by including special HTML code in a comment. Both WP Super Cache and W3 Total Cache contained a vulnerability that allowed for PHP code injection through a simple attack vector, but both plugins have now been updated to address the vulnerability.
The vulnerability was in the way that the plugins handled dynamic snippets included in the comments on sites with one of the plugins enabled. An attacker who found a vulnerable site would be able to execute arbitrary code on the backend server. The developers of both plugins have patched the vulnerability and so details of the bug have now become public.
“As a result, blogs with WP Super Cache (before version 1.3) and W3 Total Cache (before version 0.9.2.9) were at risk of PHP code injection. Blog comments could contain dynamic snippets (in HTML comments) and WordPress core did not filter them out. Once such a malicious comment had been submitted, a new cached version of the page was created that included the injected PHP code. Upon the first request of the cached page, that code was successfully executed,” Frank Goossens, a Belgian blogger, wrote in a description of the problem.
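The fix ultimately belongs in the plugins themselves, but the underlying sanitization step is simple to picture: strip the dynamic-snippet markers out of user-supplied comment text before it can be written into a cached page. The sketch below is an illustrative Python version of that filter; the mfunc-style comment pattern assumed here is for demonstration and the exact marker syntax differs between the two plugins.

```python
import re

# Matches mfunc-style dynamic-snippet markers hidden in HTML comments,
# e.g. <!--mfunc echo 'hi'; --> ... <!--/mfunc-->
_SNIPPET_RE = re.compile(
    r"<!--\s*/?\s*mfunc.*?-->",
    re.IGNORECASE | re.DOTALL,
)

def strip_dynamic_snippets(comment_html: str) -> str:
    """Remove dynamic-snippet markers from user-submitted comment text
    before it can reach the page cache and be evaluated as PHP."""
    return _SNIPPET_RE.sub("", comment_html)
```

This mirrors the approach O Caoimh describes below: treat snippet markers in untrusted input as content to be removed, and reserve the feature for trusted template code.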
First word of the vulnerability appeared in a WordPress user forum about a month ago, and the original poster included detailed code that demonstrated the vulnerability. Last week, Donncha O Caoimh, the author of WP Super Cache, said that he was releasing a new version of his plugin and would add a feature in a future version to disable a function that was one of the causes of the vulnerability.
“I’ve just released a new version of WP Super Cache that removes the html comments from user comments. I’ll publish a post about it in a few days time after most people have hopefully upgraded their sites. In the next release (1.4) I’m going to disable mfunc and associated functions by default because I suspect most users don’t even use them. Admins will have to enable them on the settings page,” O Caoimh wrote.
The hugely popular WordPress publishing platform is used by a wide variety of users, including professional publishers and individual writers. There are hundreds of plugins available for the platform that perform all kinds of tasks, from preventing spam comments to enabling the site to run on mobile platforms, and attackers often target vulnerabilities in those plugins, as they know that users may not update them as often as they should. Just as browser extensions and plugins such as Flash and Java have become favorites of attackers, so too have the WordPress plugins.
Serial port servers are admittedly old school technology that you might think had been phased out as new IT, SCADA and industrial control system equipment has been phased in. Metasploit creator HD Moore cautions you to think again.
Moore recently revealed that through his Critical IO project research, he discovered 114,000 such devices connected to the Internet, many with little in the way of authentication standing between an attacker and a piece of critical infrastructure or a connection onto a corporate network. More than 95,000 of those devices were exposed over mobile connections such as 3G or GPRS.
Serial port servers, also known as terminal servers, provide control system or IT administrators with remote access to non-networked equipment, enable tracking of physically mobile systems, and provide out-of-band communication to network and power equipment during outages. Not only do they provide serial port connections to devices, but many are wireless-enabled.
“The thing that opened my eyes was looking into common configurations; even if it required authentication to manage the device itself, it often didn’t require any authentication to talk to the serial port which is part of the device,” Moore told Threatpost. “At the end of the day, it became a backdoor to huge separate systems that shouldn’t be online anyway. Even though these devices do support authentication at various levels, most of the time it wasn’t configured for the serial port.”
Attackers who are able to gain access to the serial port are golden because once they’re on the server, the device assumes they are physically present and doesn’t require an additional log-in, Moore said. Making matters worse, he added, automatic log-offs are not enabled.
“So an administrator who logged into a device like an industrial control system, an attacker can follow behind them and take over an authenticated session to a serial port,” Moore said. “There are a huge number of devices out there exposing an interactive administrative or command shell without any authentication because an administrator had previously authenticated and left the session open.”
An attacker with essentially undetectable access is able to capture or manipulate data moving through the serial port. Moore said it would be possible to add a signature to the device so that, for example, any time the word “password” appears, that UDP packet and the entire serial session could be mailed to a third party.
“If you’re looking to steal data, you could write a rule where it emails you the data you care about as it floats across the serial port,” he said, adding that attackers could mess with anything from HVAC, to oil pipelines, traffic signal or even corporate VPN connections, essentially opening a backdoor into a company’s networked resources.
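The rule Moore describes amounts to a keyword trigger over the raw byte stream, and the same mechanism works defensively: a monitor watching a serial link can alert when sensitive strings cross it. A minimal sketch follows; the keyword list and chunk handling are illustrative assumptions, not any particular product's rule language.

```python
KEYWORDS = (b"password", b"passwd", b"login")

def scan_serial_stream(chunks):
    """Yield the keyword that triggered an alert for each chunk of a
    captured serial session containing a sensitive string. A small
    carry-over buffer catches keywords split across chunk boundaries
    (and may re-alert when a keyword sits inside the overlap, which is
    acceptable for a simple monitoring trigger)."""
    tail = b""
    overlap = max(len(k) for k in KEYWORDS) - 1
    for chunk in chunks:
        window = (tail + chunk).lower()
        for kw in KEYWORDS:
            if kw in window:
                yield kw
                break
        tail = chunk[-overlap:]
```

Fed a captured session such as a login prompt split across reads, the scanner flags both the prompt and the credential exchange while ignoring ordinary shell traffic.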
Access to a remote serial port happens via a log-in over telnet, SSH or a Web interface, Moore said. You could also connect to a specific TCP port that acts as a proxy for the serial port. Telnet, SSH or a Web interface requires authentication; however, an attacker could telnet into a TCP connection without authentication because the devices are configured under the assumption that anyone with access is physically connected to the serial port. Moore said he found more than 13,000 root shells, system consoles and admin interfaces that did not require authentication or were pre-authenticated. However, Moore said he was unaware of any attacks.
“Seeing how much stuff that’s out there, it’s kind of surprising no one has,” Moore said. “You don’t need to know anything about serial ports to start exploiting this stuff. If you scan, you start seeing random authenticated router shells popping up. For an attacker, they don’t have to know that’s a serial port, they’ll just say ‘hey cool, a shell.’”
As for remediation, Moore said he is trying to raise awareness of the issue and is encouraging companies to use only encrypted management services, require authentication for serial ports, enable activity timeouts for serial consoles, and follow other best practices.
Photo courtesy HD Moore.
Microsoft has released a new version of the MS13-036 patch that was causing some customers’ machines to crash. The company had recommended in the days after the original fix was first released that customers uninstall the MS13-036 patch while Microsoft investigated the cause of the problems.
The new fix that Microsoft released on Tuesday resolves some conflicts with third-party applications that apparently were causing the blue screen issues for some people. The company didn’t specify which software was causing the crashes, but said that the update should resolve the problems.
“We’ve determined that the update, when paired with certain third-party software, can cause system errors,” said Trustworthy Computing group manager Dustin Childs at the time that the patch was recalled earlier this month.
The MS13-036 patch fixes a pair of race condition vulnerabilities in the Windows kernel, both of which could be used for code execution. However, the patch was rated important rather than critical because an attacker would need physical access to a vulnerable machine in order to run code using one of these bugs.
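The Windows bugs themselves aren't public in detail, but the class is familiar: a time-of-check/time-of-use (TOCTOU) window in which a second thread changes state between a check and the action that relied on it. The sketch below is an entirely hypothetical user-space illustration of the pattern, not the kernel code in question.

```python
import threading

class Account:
    """Illustrates a check-then-act race and the lock that closes it."""

    def __init__(self, balance):
        self.balance = balance
        self._lock = threading.Lock()

    def withdraw_racy(self, amount):
        # Time-of-check: another thread may also pass this test before
        # either thread performs the time-of-use below, overdrawing the
        # account. This is the shape of a TOCTOU bug.
        if self.balance >= amount:
            self.balance -= amount
            return True
        return False

    def withdraw_safe(self, amount):
        # Holding the lock makes the check and the act one atomic step.
        with self._lock:
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False
```

In kernel code the analogous window can let locally running code flip a value between validation and use, which is also why a bug like this requires an attacker to already have access to the machine.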
Childs said in a blog post Tuesday that customers should install the revised update as soon as possible.
“As we previously discussed, we stopped distributing this update when we learned some customers were having issues. The new update, KB2840149, still addresses the Moderate security issue described in MS13-036, and should not cause these issues. If you have automatic updates enabled, you won’t need to take any actions. For those manually updating, we encourage you to apply this update at your earliest convenience,” he said.
It’s a familiar refrain: Attackers often have months of unfettered access to corporate networks; and security and network managers remain in the dark until they’re notified of serious breaches by third parties.
Enterprises, regardless of industry, dread that fateful knock on the door by the FBI, card brands or fraud detection services informing them that an external group has been moving data off their network for months. Yet it’s happening with greater frequency and with devastating consequences in some cases, according to the 2013 Verizon Data Breach Investigations Report (DBIR).
This year’s version of the DBIR has quantified not only financially motivated attacks, but also those carried out by state-sponsored attackers targeting intellectual property or military secrets. The numbers in the report paint a representative picture of the state of affairs for companies that value IP such as those in manufacturing and telecommunications, and the numbers aren’t pretty. Sixty-six percent of breaches remain undiscovered for months or longer, up from 55 percent in 2011 and 41 percent in 2010.
Targeted attacks and attacks motivated by espionage represent 21 percent of the 621 breaches investigated by Verizon’s RISK Team and those attacks account for the inflated numbers representing the time from initial compromise to discovery, Verizon said.
“That pits the virtually unlimited resources of a nation against the very finite resources of a single company. Nobody can reasonably be expected to withstand that,” the DBIR says, adding that while prevention remains an important part of any security strategy, more investment must be made in detection and response to breaches that result in data loss.
This year’s report paints a gruesome picture, one where most companies are compromised and lose data in a matter of hours. Financially motivated attacks that rely on relatively simple SQL injection attacks or compromises of remotely accessible point-of-sale systems guarded by weak or default credentials beef up those numbers substantially. Attackers are able to break those systems in a matter of seconds or minutes. And initial compromises in financially motivated attacks are not difficult, according to the DBIR data. In such attacks, 78 percent were considered low or very low difficulty, while in espionage-related attacks, the degree of difficulty climbs to 22 percent overall and 26 percent in attacks against large organizations.
The time from compromise to data exfiltration is longer only because espionage attackers require more time to pivot between network resources, and to find and exploit vulnerable systems, before they’re able to move data to a command and control server. From the data, 84 percent of compromises are achieved within hours, and in 69 percent of breaches, data is moving off the network within hours.
Unfortunately for the victims, only 9 percent of breaches are discovered within hours. It’s taking months to years for most network intrusions to be discovered; 62 percent of breaches are found within months, 4 percent in years.
“Let’s stop treating [detection and response] like a backup plan if things go wrong,” the DBIR says, “and start making it a core part of the plan.”
Once discovered, most breaches are contained within days or weeks (76 percent), leaving a quarter to be contained within months or longer.
There is some tempered good news: while 70 percent of breaches were discovered by third parties, down from 92 percent last year, detection capabilities still seem to be lacking within IT organizations. Another win is that external notification by organizations with no business relationship to the victim, such as ISPs and industry watchdog groups, climbed to 34 percent of breaches in cases of espionage. Fraud detection services and customer and law enforcement notification lead the way for financially motivated attacks.
In an attempt to better evade detection, cybercriminals are increasingly configuring their command and control infrastructure in such a way that initial malware callbacks communicate with a server located in the same country as the newly infected machines.
This emerging trend is among the vast and varied findings of a FireEye report, “The Advanced Cyber Attack Landscape,” made public this morning. FireEye gathered the data in the report in an analysis of some 12 million messages communicated between various malware targeting enterprises and their command and control servers.
The creation and proliferation of malware is more global than ever, with C&C servers living in 184 countries. That’s a substantial 42 percent increase from 2010, when only 130 nations played host to C&C servers.
While the breadth and quantity of such servers is changing dramatically, much remains the same: parts of South and Eastern Asia and areas near Eastern Europe are still the international cybercrime hotspots. China, South Korea, India, Japan, and Hong Kong are believed to be responsible for 24 percent of cyberattacks, while Russia, Romania, Poland, Ukraine, Kazakhstan, and Latvia accounted for 22 percent. The caveat to FireEye’s claim that these regions are driving the majority of advanced attacks is that their analysis showed that 44 percent of C&C servers are actually located in North America. This, FireEye believes, is a statistical anomaly reflecting the new reality that attackers are evading detection more and more by distributing the C&C servers in close proximity to their targets.
In fact, North America’s 44 percent share of these servers and its more drastic 66 percent share of C&C servers responsible for advanced persistent threat-style attack campaigns is an indicator of something that has not changed according to FireEye: relatively speaking, the U.S. corporate landscape, particularly its wealth of high technology firms, is densely packed with valuable intellectual property, and therefore attackers continue targeting companies based there. However, forensic analysis of the tools used in these attacks and the communication tactics of the C&C infrastructure supporting them revealed that the vast majority of attacks – and as many as 89 percent of APT tools, most of them related to Gh0stRAT – originated in China where they were developed by Chinese hacker groups.
Another evolution is a move toward the use of social sites like Facebook and Twitter to communicate with infected machines. This tactic and another whereby attackers embed stolen content in commonly used JPG files are deployed by attackers in an attempt to make malicious traffic seem benign.
Other interesting findings highlighted by FireEye are that South Korean businesses, mostly because of that country’s incredibly developed Internet infrastructure, are witnessing the highest level of callbacks per organization. Their findings also suggest that Japan’s density of intellectual property may rival that of the U.S., considering that 87 percent of callbacks originate and stay in that country. Lastly, high exit-rate detection in both the U.K. and Canada suggests to FireEye that attackers are generally unconcerned about being detected in those countries.
Optimism and praise followed last week’s Java critical patch update. Oracle not only patched 42 vulnerabilities in the Java browser plug-in, but also added new code-signing restrictions and new prompts warning users when applets are potentially malicious. It took less than a week, however, to deflate any good will toward Java that resulted.
Noted Java bug hunter Adam Gowdiak, founder and CEO of Security Explorations of Poland, said this week that he reported to Oracle a new Reflection API vulnerability that affects all Java versions, including 7u21 released last Tuesday.
“It can be used to achieve a complete Java security sandbox bypass on a target system,” Gowdiak wrote on the Full Disclosure mailing list on Monday. “Successful exploitation in a Web browser scenario requires proper user interaction (a user needs to accept the risk of executing a potentially malicious Java application when a security warning window is displayed).”
Attackers can exploit this vulnerability to achieve a complete Java security sandbox escape, Gowdiak said, adding that he also sent proof-of-concept code to Oracle demonstrating an exploit. Gowdiak, who first reported vulnerabilities in the Reflection API a year ago, also said that this vulnerability is present in the server versions of the Java Runtime Environment, as well as in the JRE Plugin and JDK software.
“It’s been a year since then and to our true surprise, we were still able to discover one of the simplest and most powerful instances of Java Reflection API-based vulnerabilities,” Gowdiak said. “It looks like Oracle was primarily focused on hunting down potentially dangerous Reflection API calls in the ‘allowed’ class space. If so, no surprise [this issue] was overlooked.”
Gowdiak identified four Java components and APIs that are at risk of exploitation: Sun Microsystems’ implementation of the XSLT interpreter; Long Term Persistence of JavaBeans Components; RMI and LDAP (RFC 2713); and many SQL implementations.
“These are the APIs and Java components that could be potentially used as execution vectors for untrusted Java code in other than web browser environments,” he told Threatpost via email. “In other words, they have the potential to be abused for the exploitation of Java SE flaws.”
Last week’s Oracle patch update repaired many issues plaguing the platform. Of the 42 vulnerabilities patched in the update, all but three were remotely exploitable. A number of Java zero-day vulnerabilities and exploits have been the center of watering hole attacks and other high-profile website hacks.
The update also requires that any applet executed in the browser be signed with a trusted certificate, and that all code prompt the user for approval before running. The level of user interaction required depends on the potential risk involved, Oracle said. Oracle has color-coded its user prompts: blue for apps signed by a trusted certificate, and yellow for an untrusted or expired certificate. Red text accompanies high-risk warnings that an applet could be a security risk.
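The color scheme described above amounts to a simple risk classification. The sketch below models it in Python purely for illustration; the statuses and function name are hypothetical, not part of any Oracle or Java API:

```python
# Illustrative model of Oracle's color-coded applet warnings as described
# in the update notes. Statuses and names are invented for this sketch.

def prompt_color(cert_status, high_risk=False):
    """Map an applet's certificate status to the warning color shown to the user."""
    if high_risk:
        return "red"       # high-risk warning: applet could be a security risk
    if cert_status == "trusted":
        return "blue"      # signed by a trusted certificate
    if cert_status in ("untrusted", "expired"):
        return "yellow"    # untrusted or expired certificate
    return "red"           # unknown status treated as high risk

print(prompt_color("trusted"))  # blue
```

The point of the tiered prompts is that the amount of friction scales with risk, which is consistent with Oracle's statement that the level of user interaction depends on the potential risk involved.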
“We are not sure if these warnings will help the platform,” Gowdiak said. “Java was supposed to provide a safe execution environment for untrusted, potentially harmful code. A dialog prompt warning a user about a security risk prior to the execution of an untrusted application basically denounces one of the main advantages of the platform: its security.”
Oracle also removed the low security settings in the Java Control Panel; users will no longer be able to opt out of the security features built into Java.
“The platform will not deny the execution of Java applications, however in high-risk scenarios the user is provided an opportunity to abort execution if they choose,” Oracle said in its advisory last week. “Future update releases may include additional changes to restrict unsafe behaviors like unsigned and self-signed applications.”
Dennis Fisher talks with Chris Hoff of Juniper Networks about his childhood scaring sheep on a farm in New Zealand, his early days hacking on the first wave of personal computers, his misadventures in a college computer lab and how he ended up as an itinerant security guy.
Image via Flickr user Myrcurial‘s photostream, Creative Commons
Targeted cyberespionage attacks have dominated discussions within the security community and outside of it, from the mainstream media to the halls of the executive and legislative branches of government. But until now, discussions of attacks stemming from China that target intellectual property held by engineering, manufacturing and military interests in the United States have been anecdotal, one-off analyses of specific breaches.
The 2013 Verizon Data Breach Investigations Report (DBIR) has changed that. For the first time, the report has branched out and extensively quantified nation-state attacks motivated by espionage. This is a significant departure from previous editions of the report, which many consider to be the industry standard research on data breaches.
Released today, the report takes great pains to correlate threat actor motives and the data that is compromised. It also has a host of new contributors, now 19 in all, bringing fresh perspectives to the data set used to make up the bulk of the 60-plus page report. As has been the case with the past eight DBIRs, the data comes from paid forensic investigations carried out by Verizon’s RISK Team, in addition to contributions from law enforcement and computer emergency response teams worldwide, as well as industry groups, large consulting and services organizations, and the U.S. Secret Service.
The data in this year’s report comes from 621 breaches where data loss or disclosure was confirmed and 47,000 reported security incidents. Despite the new focus on espionage-related attacks, the report still does its customary deep dive into financially motivated attacks and compares the tactics used by cybercriminals to those used by nation-state actors.
The report’s bevy of new contributors brought with them the most insightful data into attacks tied to China targeting intellectual property, which accounted for 19 percent of breaches.
“They all focus on something different,” said Jay Jacobs, one of the DBIR authors and a principal at Verizon. “You have to understand the research and information you want to pull out; that makes a difference in what you want to share. If you want to count the number of SQL injection attacks, that’s one thing. If you want to correlate that to industry and organization size, you have to expand your vision.”
The majority of data breaches still rely on the exploitation of weak or default credentials or stolen passwords. Hackers continue to blend hacking and malware to steal payment card information or to gain legitimate access to network resources to steal intellectual property. Most financially motivated attacks are opportunistic and rated as low difficulty, while those motivated by espionage use a combination of phishing emails and advanced malware to ramp up the difficulty of initial compromise and subsequent actions.
“The ‘I’m too small to be a target’ argument doesn’t hold water. We see victims of espionage campaigns ranging from large multinationals all the way down to those that have no IT staff at all,” the report says. “Lesson two is that some industries appear to be more targeted than others.”
Most attacks motivated by espionage target the manufacturing and transportation industries, while retail and food services lead the way for financially motivated actors. State-sponsored hackers covet not only secrets and internal organizational data, but system information.
“Most organizations have some form of proprietary or internal information they want kept private. Without this secret sauce, it’s hard to stay competitive,” the report says. “And because it’s a secret and competitively advantageous, others may want to steal that sauce. Thus, ‘who wants my sauce?’ is probably a better question than ‘am I a target of espionage?’”
State-affiliated actors account for 21 percent of attacks, compared to 55 percent attributed to organized crime groups. While China accounts for the majority of state-affiliated espionage attacks (96 percent), Eastern European countries such as Romania and Bulgaria, along with the Russian Federation, account for the bulk of financial crimes targeting payment systems with commodity malware not found in espionage attacks. Attribution, Verizon says, isn’t based just on geolocation of IP addresses, for example, but on data from arrests and the use of particular tactics associated with known groups of attackers. Insiders, meanwhile, aren’t on the radar, with 92 percent of attacks attributed to external sources, again, most of those coming from criminal groups. Insiders have a role in 14 percent of data breaches, most of that number resulting from non-malicious actions, including human error.
The use of malware hasn’t tapered off. Espionage-related attacks, for example, account for a spike in the use of malicious email attachments as part of phishing campaigns. Phishing has become the initial entry point in many financial attacks too, in addition to direct compromise of a point-of-sale system or ATM. Malware used in espionage attacks, however, has very different goals than malware used in financially motivated attacks. Malware used to spy on organizations enables prolonged access to systems, control of those systems, and the ability to capture and exfiltrate data.
Spyware, keyloggers and RAM scrapers dominate the types of malware used in financially motivated attacks, while in espionage attacks, the threat actors are interested in a number of different things including grabbing screenshots of sensitive data. State-affiliated attackers are interested in maintaining persistence on machines and want to install backdoors in order to move data and install more malware such as downloaders, password dumpers and rootkits.
“Throughout this process, attackers promulgate across the systems within the network, hiding their activities within system processes, searching for and capturing the desired data, and then exporting it out of the victim’s environment,” the report says.
Hacking remains the most popular way attackers infiltrate organizations, primarily through the use of stolen credentials. In financially motivated attacks, hackers will brute-force weak credentials or socially engineer them. Organized crime groups behind financially motivated attacks again made payment card data the most sought-after data type; that, in addition to identity information, can most quickly be turned into cash. In espionage attacks, stolen credentials are used to set up backdoor connections, and then shell services such as SSH or RPC are used to pivot internally to different network resources. Desktop-sharing services such as RDP and VNC, meanwhile, are favorites of financially motivated attackers.
Given the number of new data sources, this year’s DBIR branches out in a number of new directions. With high-profile espionage attacks gaining more attention, such as those on the New York Times, Apple, Facebook, Twitter and a number of government and activist organizations, defenders now have more insight into attacks that rely on more than social engineering and commodity malware.
“We’re seeing a diverse set of data that we can analyze,” Verizon’s Jacobs said. “We’re getting more views into breach data and seeing a diversity in threat actors and motives.”
As Twitter continues to secure its footing in the social network spectrum, it continues to be dogged by an ongoing deluge of spam and malware intent on tapping into, and duping, the social network’s 200 million-plus users.
Tanya Shafir, a researcher at the security firm Trusteer, recently discovered a new variant of malware being used by cybercriminals to compromise otherwise legitimate Twitter accounts.
According to a post by Director of Product Marketing Dana Tamir on the company’s blog today, the malware is “an active configuration of TorRAT” and is spreading via man-in-the-browser attacks.
Trusteer spotted the malware posting a series of tweets on some users’ accounts about everything from Beyonce to the Netherlands’ king, Willem-Alexander. Each tweet was accompanied by a suspicious link which, while not inspected, Trusteer assumes points to a malicious website that likely leads to a drive-by download.
Malware like this has been seen before, but as Trusteer points out, it’s usually attempting to leverage users’ financial data by targeting their banking accounts and log-in credentials.
Twitter has done a good job at curbing spammy and malicious tweets as of late but at one point last year some accounts were sending over 150,000 malicious tweets at a time. Now the site allows users to report unwanted tweets as spam and block users who are blatantly peddling questionable content.
If you’ve ever sat in on a cybersecurity hearing on Capitol Hill or attended a security conference, then you’re no doubt familiar with the oft-preached need for information sharing and private-public partnerships. So frequently repeated are these refrains that they’re almost as meaningless as the acronym “APT.”
However, the security firm Group-IB and the Russian government’s cybersecurity investigatory unit, Department K, claim to have curbed the theft of a billion rubles by doing just that: sharing information and partnering.
Russia’s largest bank, Sberbank of Russia, suspected that someone was attacking its online banking operation and reached out to Group-IB to carry out a forensic analysis of its networks. Group-IB determined that the attacker was stealing money from the bank’s customers by circumventing its SMS-based payment verification feature.
In the end, the Russian cybersecurity police known as Department K used information provided by Group-IB and Sberbank of Russia to arrest an unnamed 40-year-old man from the Volga River city of Togliatti. According to Group-IB, the prolific Russian cybercriminal exploited the online banking systems of various Russian banks in order to perform more than 5,000 fraudulent transactions from as far back as August 2011.
Group-IB’s analysis determined that the attacker, who has since been arrested, deployed the popular Carberp malware against his targets. The perpetrator of the attack campaign installed the Carberp Trojan on the machines of Sberbank’s unknowing online customers. The malware then used Web-injection functionality to display spoofed banking pages to users on infected systems. In this way, users willingly submitted their banking log-in information and cell phone numbers into web forms that appeared to come from their bank but actually communicated with the attacker. Using this information, the man managed to clone his victims’ SIM cards and bypass SMS-based mobile payment confirmations.
“The investigation of this case — from the first moment when Group-IB received a complaint from a victim to when the perpetrator was apprehended — was conducted in record time, in less than six months. Thus, we managed to prevent thefts from Russian banks on the amount of 1 billion Roubles ($34 Million)” said Group-IB CEO, Ilya Sachkov. “This was the first case investigated within the European Cyber Security Federation (ECyFed) union, which includes Group-IB, CyberDefcon, and CSIS.”
*Image of Sberbank of Russia bank in Krasnodar, Russia via Helen Flamme‘s Flickr photostream
Details have been disclosed about vulnerabilities exploited in Chrome and Java during the Pwn2Own contest. Google made patches available for the Chrome flaw within 24 hours, while Oracle patched Java fully last week.
Details were not disclosed by the researchers, who netted tens of thousands for their exploits, until last Friday, more than a month after the contest.
The exploits in question here used a variety of techniques to break both the popular browser and the browser plug-in. Java has had a particularly miserable year in terms of security, starting shortly after Christmas with a number of zero-day exploits used in high-profile targeted attacks. Chrome, meanwhile, remains a difficult challenge for researchers and hackers alike. Not only is it a popular target during Pwn2Own, but Google runs a concurrent Pwnium event during the CanSecWest Conference challenging researchers to take a crack at the browser.
MWR Labs researchers were able to take down an up-to-date version of Chrome running on a fully patched Windows computer during the contest. Not only did they find and exploit a previously unknown flaw in Chrome, but were able to chain that together with a kernel exploit targeting Windows to elevate privileges and own the browser.
Meanwhile, James Forshaw, of Context Information Security of London, was able to break Java with an exploit for CVE-2013-1488, a vulnerability in the java.sql.DriverManager class, a trusted part of the Java framework, he wrote in a blogpost on Friday. This part of Java, he said, is used to access relational databases.
“Within the source code for this class, a Java vulnerability hunter would be drawn to the two AccessController.doPrivileged blocks like a moth to a flame,” he said. “They allow the Java libraries to temporarily elevate its privileges to perform a security critical action.”
Oracle released Java 7u21 last week with security patches that repaired all of the vulnerabilities exploited at Pwn2Own. Forshaw’s exploit enabled a sandbox bypass by repurposing unrelated code to ultimately disable the security manager and run malicious code as trusted. He said Oracle does not rate this flaw as critical because of the work involved, but a determined, persistent attacker could find success.
“That is also why I think something like Java can never be secured against hostile code running within a sandboxed environment,” Forshaw said. “The attacker has too much control to craft the environment to exploit the tiniest of weaknesses. The large amount of trusted code bundled with the JRE just acts to amplify that power.”
MWR Labs researchers, in turn, had to get equally creative to exploit the vulnerabilities in Chrome and beat the browser’s sandbox, as well as its use of address space layout randomization (ASLR). The exploit targeted a vulnerability in WebKit, the browser’s rendering engine, as well as a kernel overflow vulnerability in Windows, the underlying operating system.
The WebKit bug occurred in the way it handled viewing targets in Scalable Vector Graphics documents. SVG files support animation and interactive features on websites. MWR said in a blogpost it was able to specify a viewTarget for an SVG document and embed non-SVG elements inside a document.
“It is very difficult to secure such a complex piece of software, which frequently deals with untrusted input,” MWR said. “Even with modern exploit mitigation techniques and the inclusion of sandboxed renderer processes, these protection mechanisms can be circumvented by exploiting the underlying operating system.”
Thousands of U.K. business computers have been infected by espionage malware using a custom protocol to communicate with its command and control servers. Researchers at Israeli security company Seculert added that the malware is still percolating with a number of capabilities yet to be deployed.
The custom protocol has another unique element to it, in that it always initiates communication with a command that includes the string “some_magic_code1” as an authenticator. After an initial connection over HTTP, the interaction changes to the custom protocol and additional instructions are fed to infected machines.
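Because the malware reportedly opens every session with a command containing that fixed authenticator, the string itself makes a convenient network indicator. The sketch below shows, under that assumption, how a capture-inspection tool might flag it; the payload format and function name are invented for illustration, and only the “some_magic_code1” marker comes from Seculert’s report:

```python
# Sketch of a network-capture check for the "magic" authenticator string
# Seculert describes. The surrounding traffic format is a guess; only the
# "some_magic_code1" marker itself comes from the report.

MAGIC = b"some_magic_code1"

def looks_like_magic_c2(payload):
    """Flag a captured payload whose opening command carries the authenticator."""
    # The malware reportedly initiates every session with a command that
    # includes the magic string, so a match early in the stream is suspicious.
    return MAGIC in payload[:256]

print(looks_like_magic_c2(b"CMD some_magic_code1 init"))  # True
print(looks_like_magic_c2(b"GET / HTTP/1.1"))             # False
```

A static marker like this is exactly the kind of weakness defenders look for in custom protocols: once known, it can be matched at the network perimeter regardless of what the rest of the protocol does.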
Seculert CTO Aviv Raff said the malware, in one example, was instructed to add a new user to the infected system with a user name of WINDOWS and a password of MyPass1234 which would be used to give the attacker remote access to the compromised machine.
“This ‘magic malware’ — as we’ve dubbed it — is active, persistent and had remained undetected on the targeted machines for the past 11 months,” Raff wrote on the company’s blog.
Custom protocols used by malware to communicate with a remote server have been part of some high-profile targeted attacks, including the one on RSA Security in 2011. In this case, targets in a number of U.K. industries, including financial services, education and telecommunications, have already been hit by the malware, which is capable of stealing data from compromised machines, enabling remote access for the attackers and hijacking Web browsing sessions.
“It can be used for espionage,” said Seculert CTO Aviv Raff in an email to Threatpost.
Raff said there are indications that the malware is still under development.
“We have seen several indications of features which are not yet implemented, and functions which are not yet used by the malware,” Raff said, adding that some of those features include the ability to open a browser on the victim machine via an RDP session.
“The missing and unused features are more technical. e.g. creating new processes under an impersonated user or parsing XML files,” Raff added.
Raff also said that Seculert cannot be certain how initial infections are happening.
“Currently, we don’t know the exact infection vector. But, because of the small presence of the dropper on the infected machine, it seems to be some sort of an exploit (spear phishing or drive-by download),” Raff said.
“As the malware is capable of setting up a backdoor, stealing information, and injecting HTML into the browser, we believe that the current phase of the attack is to monitor the activities of their targeted entities,” Raff added. “But, because this malware is also capable of downloading and executing additional malicious files, this might be only the first phase of a much broader attack.”
FireEye experts have been tracking the Operation Beebus campaign for a few months now, and their latest research suggests that whoever is responsible for the attacks is ultimately interested in stealing drone technology-related secrets.
Operation Beebus is an APT-style attack campaign targeting government agencies in the United States and India as well as numerous aerospace, defense, and telecom industry organizations. The attackers are targeting these groups with a previously unseen backdoor Trojan called Mutter that exploits known vulnerabilities.
The domains and command and control servers running this campaign are located all over the world and FireEye believes that the infamous Comment Crew is responsible for the campaign. Comment Crew is the same group that Mandiant recently uncovered as APT 1, a secret unit of China’s People’s Liberation Army tasked with hacking into and stealing information from international companies and governments.
In at least one case, FireEye observed a spear phishing attack that deployed a malicious attachment masquerading as a document containing details about the Pakistani military’s advances in drone technology. The document is attributed to Aditi Malhotra, an Associate Fellow at the Centre for Land Warfare Studies (CLAWS) in New Delhi. Malhotra is apparently a real person with writings that can be found online, but it is not clear if she actually wrote the document or if the attackers are just using her name. A second document is all mixed up, with a contact email from Andrews Air Force Base in Maryland and a physical address in Pakistan. Other documents used are either blank or contain unreadable characters.
Interestingly, the malware is making use of an evasion technique similar to one deployed by those that attacked South Korean banks and broadcasters last month. In essence, the attackers designed the malware so that it delays execution and remains inactive on host systems for as long as possible. The idea here, FireEye researcher James Bennett explains, is that if the malware waits long enough, then the scanner will give up on its analysis and pass the malware off as benign software. In this way, the malicious software is better at avoiding the dynamic detection methods deployed by most malware scanners.
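The evasion logic Bennett describes can be reduced to a race between the malware's idle period and the sandbox's observation window. The toy model below illustrates that trade-off; the timings and function name are invented for this sketch, not taken from FireEye's analysis:

```python
# Toy model of the time-delay evasion FireEye describes: a dynamic
# scanner that observes a sample for a fixed window will miss a payload
# that stays dormant past that window. All numbers are illustrative.

def sandbox_verdict(delay_before_payload, analysis_window):
    """Return the scanner's verdict given when the malicious behavior fires (seconds)."""
    if delay_before_payload <= analysis_window:
        return "malicious"   # payload ran while the sandbox was watching
    return "benign"          # sandbox gave up before anything happened

# A two-minute sandbox misses malware that idles for ten minutes.
print(sandbox_verdict(delay_before_payload=600, analysis_window=120))  # benign
```

This is why some sandboxes now fast-forward system clocks or patch out sleep calls: lengthening the real observation window is expensive, so defenders instead try to collapse the malware's perceived delay.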
Bennett claims that Operation Beebus is designed to pilfer all sorts of information related to air-, sea-, and land-based drone technology. Bennett says that he has seen the campaign attempt to steal research, design, and manufacturing specifications for drone vehicles and subsystems from more than 20 target organizations. At least one of those targets was, according to Bennett, an academic institution receiving military funding for its unmanned vehicle research.
The Mutter Backdoor itself, which is among the common threads across the entire Operation Beebus campaign, comes in two varieties. Both are DLL droppers. You can read the technical details along with the rest of the FireEye analysis here.
The next shoe has fallen in an effort to force wireless carriers and handset makers to provide regular security updates to Android mobile devices. The American Civil Liberties Union filed a complaint this week with the U.S. Federal Trade Commission accusing four leading carriers of deceptive business practices and knowingly selling defective phones to consumers and businesses.
ACLU principal technologist and senior policy analyst Christopher Soghoian brought the issue to light earlier this year at the Kaspersky Lab Security Analyst Summit, where he said millions of Android devices were multiple versions in arrears, leaving users vulnerable not only to attacks on their personal digital information, but potentially to physical attacks as well.
In the complaint, written by Soghoian, the ACLU asks the FTC to investigate Verizon, AT&T, T-Mobile and Sprint Nextel, arguing that the carriers’ reluctance to patch security vulnerabilities in Android phones is a deceptive and unfair business practice. Further, the ACLU requested that the FTC force carriers to warn customers about unpatched vulnerabilities, allow customers with vulnerable phones to escape their contracts without early termination penalties, and let customers exchange their phones at no cost for models that receive regular security updates, or return them for a full refund.
The FTC came down hard on mobile hardware manufacturer HTC in late February, when a settlement was reached after a complaint was filed against HTC America charging it with putting the security and privacy of customers at risk by failing to provide regular security patches to Android devices. HTC, at significant cost, will have to not only develop and release patches, but establish a program that injects security into its development processes, submit to security assessments for 20 years and provide adequate security training for its developers.
It’s hard to tell what happens next with the ACLU complaint, Soghoian said.
“Now we wait. If the FTC decides to investigate, we won’t know about it until the investigation is over and a settlement is reached,” he said. “That could take a year or two. That is frustrating for outsiders, but that is just how the FTC does business.”
Threatpost reached out to all four carriers in question for comment. AT&T and Sprint Nextel did not reply. T-Mobile spokesperson Glenn Zaccara provided a statement to Threatpost saying the company provides regular security updates to Android customers.
“We provide regular and frequent OS updates as well as maintenance releases for a variety of improvements, including security related improvements,” Zaccara said, adding that the most recent OS upgrades provided to customers were sent out April 8, when Samsung Galaxy S II and Samsung Galaxy Tab 2 users were upgraded to Jelly Bean (Android 4.1.2).
Android 4.1.2 was released by Google last Oct. 9, more than a month before Google released Android 4.2; 4.2.2 followed on Feb. 11, meaning that even freshly upgraded users remain several releases behind.
“So in response to our complaint about slow updates, T-Mobile is citing a recent software update for 2 particular handsets, enabling users to upgrade to a version of Android that was released by Google in October of 2012,” Soghoian said. “How exactly does this make T-Mobile look good?”
Ars Technica did a detailed study on Android handset updates, and the numbers aren’t pretty for the four carriers in question here, as well as for a number of handset makers. Verizon, AT&T and T-Mobile sometimes took up to 13 months to provide updates, while many models from all four carriers never received a second update.
A Verizon statement said: “We work closely with our OEM partners and provide mandatory updates to devices as quickly as possible, giving attention and priority to ensuring a good and secure customer experience. We will review the complaint when it is filed with the FTC.”
The 17-page ACLU complaint goes into detail on the influence carriers have over which features manufacturers include in smartphones, from carrier-specific apps to the removal of features, such as tethering capabilities, that would threaten the carriers’ revenue streams.
For context, the complaint cited comScore figures showing that 53 percent of smartphones used by consumers are Android devices, and that 70 percent of devices sold in the fourth quarter of 2012 were Android based. In addition, the complaint said that Google statistics show only two percent of Android devices are running the latest version of the OS, 4.2.x. Meanwhile, Android 2.3 (Gingerbread), released in 2011, is on 40 percent of Android devices, according to Google’s developer dashboard.
“The slow rate of adoption of the most recent versions of Android does not reflect a failure by consumers to seek out and install operating system updates,” Soghoian wrote in the complaint. “Instead, it reflects the fact that for most Android smartphones in use, updates to the most recent version of the operating system simply have not been made available for consumers to install.”
Android malware, meanwhile, is an extraordinary problem. Research done by Kaspersky Lab indicates that 99 percent of mobile malware targets Android because of its open source nature and the ease with which attackers can get malicious applications onto the Google Play store. The level of vetting, for example, does not match that of Apple’s App Store.
“Widely distributed Android malware has exploited known security vulnerabilities in the Android operating system for which fixes from Google existed, but which the vast majority of consumer devices had not received at the time of infection,” the complaint said. “The wireless carriers have failed to warn consumers that the smartphones sold to them are defective, that they are running vulnerable software, and that other smartphones are available that receive regular, prompt updates to which consumers could switch.”
Two elders of information security came to Source Boston 2013 Wednesday morning to encourage the next generation to grab the torch from them and to urge great caution in diving too deeply into specialization.
Heavy thinkers Dan Geer and Richard Thieme said that the industry is closing in on an end of an era where practitioners soon will no longer come to security from a variety of backgrounds, bringing along with them lessons learned in other disciplines.
“We’re close to a transitional end because people can get degrees and certifications, and security is becoming institutionalized,” said Thieme, a former clergyman who has a literature background.
Geer, who has a biomedical background, said he thinks about security in terms of disease models, much in the way a civil engineer would apply knowledge of bridge construction to security or a physician would think in terms of triage.
“Any background that requires you to think [applies to security],” Geer said. “That’s what makes this field fascinating. This is truly a renaissance field. While you can, I think you should steal this mind-view from us. Steal from us before we are replaced by a leading expert on one cubic inch of the security manual.”
Geer and Thieme are true historians and observers of technology and security, and both are still making an impact. Geer is CISO of In-Q-Tel, a venture capital firm that operates on behalf of the intelligence community, looking for innovative security technologies to bankroll. Thieme, meanwhile, continues to contribute articles to the community and is a frequent speaker at industry events; he has spoken at every DefCon since 1996, for example. As moderator Joshua Corman said, Geer and Thieme represent the left brain and right brain of the industry: Geer its scientist and mathematician, and Thieme the hacker culture’s conscience and source of ethics.
Geer’s fascination with metrics and measuring security outcomes has made his reputation. As an indicator of the beginning of the end of security generalists, he shared details of a project in which he plotted, over a 21-year period, the number of academic articles in the computer security literature and the number of times those works were cited. Looking at what he called the half-life of these articles, he measured how long it took for an article to be cited a 50th time, and arrived at the conclusion that while the number of authors is rising, the average half-life of an article is falling.
“I think that’s an unarguable marker for specialization,” Geer said. “I can’t recommend anyone to be a generalist. Be a serial specialist, but I don’t think it’s possible to start from scratch and be a broad-spectrum generalist.”
Thieme said this dynamic is also true for citations in the medical field, which weakens the level of institutional knowledge.
“Masters of their domains are not familiar with their history,” Thieme said. “They are specialized to the point where true dialog between people is difficult because common points of reference are not there.”
Richard Thieme image via Jason Scott