During the last week of September the antimalware industry got together in one of the oldest and most legendary information security conferences in the world, the 24th Virus Bulletin International Conference (VB2014), held in the beautiful Seattle, USA. Kaspersky Lab was there to present and share a wide range of ongoing research topics with the security community.
On the first day of the conference we were shown, again and again, that the Linux operating system is no longer so malware-free. Dismantling the myth, several talks addressed the topic; among them, "Ebury and CDorked. Full disclosure" and "Linux-based Apache malware infections: biting the hand that serves us all" brought attention to non-traditional malware and to how the Apache web server is caught in the middle of this *nix world, becoming an efficient platform for attacking and infecting unsuspecting clients.
My colleague Santiago Pontiroli presented on the current "bitcoin bonanza" and how cybercrime is quickly targeting cryptocurrencies and their users. While sharing some of the most interesting malware samples that target bitcoin and other alternative currencies, he gave the audience an overview of the benefits that digital currencies offer to Latin American countries and the reasons behind the criminal activity.
The icing on the first day's cake was the presentation shared by Patrick Wardle who covered "Methods of malware persistence on Mac OS X", again showing us that not everything in the malware ecosystem is about Microsoft.
With so many good talks to attend on the second day, making the right choice was sometimes rather difficult. A very interesting presentation by Jérôme Segura on technical support scams demonstrated in detail how to build a honeypot to catch these scammers, while emphasizing the importance of user awareness and education.
I presented a year-long research project on attacks against "boletos", an old and very popular payment system from Brazil based on printed documents and a barcode, showing how local bad guys have adapted their trojans to alter these documents, redirecting payments to their own accounts and stealing millions of dollars in the process.
Then it was the turn of my colleague David Jacoby to give an extremely funny (yet informative) presentation on how he hacked his own home, exploiting vulnerabilities in networked devices such as smart TVs, printers and NAS units. He interactively demonstrated how exposing these devices to attack could compromise an entire home network. The whole presentation was illustrated with funny GIFs and (interestingly enough) the slides were hand-crafted in MS Paint.
Security researchers from Microsoft gave us a rundown on .NET malware analysis with their last-minute paper ".NET malware dynamic instrumentation for automated and manual analysis". As malware developers increasingly rely on high-level programming languages for their malicious creations, tools like the one presented in this talk will become essential for malware analysts looking to become proficient in analyzing malicious .NET applications.
And the last Kaspersky presentation was from Vicente Diaz on "OPSEC for security researchers". Working as a security researcher nowadays is not an easy task, especially now that we no longer deal only with technical aspects. The global picture of the security landscape these days features new actors including governments, big companies, criminal gangs and intelligence services. That puts researchers in some tricky situations.
The closing panel was funny and informative, with David Jacoby raising awareness of how disclosure of important vulnerabilities (like Heartbleed, and now the infamous Shellshock) should be handled, and of the role vendors play in this scenario. After the keynote address by Katie Moussouris of HackerOne on "Bounties and standards and vuln disclosure, oh my!", the final panel left us with a cohesive feeling for the conference, bringing into the spotlight the challenges the industry as a whole faces in vulnerability disclosure, and in protecting connected devices, the Internet of Things, cryptocurrencies and payment systems.
Times change but the same challenges remain. One thing is clear: we are still here to protect users and to fight cybercrime.
Being a security researcher nowadays is no easy task, especially as we are no longer dealing with purely technical matters. Today's global security landscape includes several new actors including governments, big companies, criminal gangs and intelligence services. This puts researchers in a difficult situation.
According to one of many definitions of OPSec:
"Operational security identifies critical information to determine if friendly actions can be observed by adversary intelligence systems"
We are hearing reports of researchers facing threats from criminal gangs, or being approached by state intelligence services. Others have found themselves under surveillance or had their devices compromised when on the road.
How can we minimize these risks? What can we do to avoid leaking information that could put us in an uncomfortable situation in the future?
Sometimes we are the public faces of a research project, but at other times we don't want to be in a visible position.
The golden rule in Operational Security is using silence as a defensive discipline. If you don't really need to say something, then keep quiet. When you need to communicate with someone, do it in a secure way that doesn't compromise the content of your message and, if possible, doesn't generate metadata around it.
This is an incredibly difficult objective to accomplish: it's a natural instinct to want to impress others and on many occasions we will face adversaries who are well trained in obtaining the information that they want. We all like to tell interesting stories.
The second golden rule is that OPSec does not work retrospectively, so we should be very careful about what we are doing now if we don't want it to come back and bite us in the future.
In terms of OPSec, every security analyst should aspire to being just another face in the crowd. If we attract too much attention to ourselves, surveillance could easily escalate beyond electronic means - and that is basically game over. In today's world of massive surveillance, standing out will attract the attention of anyone who can access the relevant data. And in today's world of information leakage and "big internet companies", it's difficult to know exactly who has access to which data:
(example of data leaked from an aggregator and published as a service)
There are some interesting examples of how anomalies have been detected from metadata and then successfully used in investigations (http://en.wikipedia.org/wiki/Abu_Omar_case). And then there is the routine application of this in mass surveillance and data mining.

So what can we do?
The first rule of implementing OPSec is: don't try to accomplish more than you can. The fact is that bad OPSec might be worse than no OPSec at all.
The main feature needed for effective OPSEC is not technical, but psychological: be meticulous, and maintain a healthy level of paranoia.
However, electronic surveillance is obviously much more common, and every bit of information will be there forever. Let's look at our minimum toolset for avoiding information leaks and think about some basic tips.

Encryption
Obviously we should use as much encryption as possible. But remember that there is an inherent weakness: once your keys are compromised, all the information that was encrypted with them in the past is compromised too. As time passes, the likelihood of your keys being compromised grows. So it's much better to use IM with OTR (Off-the-Record messaging), which uses ephemeral session keys.
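The weakness of long-lived keys can be illustrated with a deliberately simplified toy cipher (for illustration only, not real cryptography): one long-term key exposes every archived message once it leaks, while ephemeral per-session keys that are destroyed after use leave past traffic unrecoverable.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    """XOR stream cipher; the same operation also decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt

# Scenario 1: one long-term key protects everything.
# If that key is ever compromised, the whole archive falls with it.
long_term = secrets.token_bytes(32)
archive = [encrypt(long_term, m) for m in (b"report draft", b"source identity")]

# Scenario 2: an ephemeral key per session, destroyed after use.
# A later compromise of the long-term identity key reveals nothing here.
session_key = secrets.token_bytes(32)
ciphertext = encrypt(session_key, b"report draft")
del session_key  # key gone; this past traffic stays unrecoverable
```

This is the intuition behind forward secrecy: protocols like OTR negotiate fresh keys per session so that no single key compromise unlocks the archive.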
Today's big question: what is happening with TrueCrypt, the most popular encryption software?
According to the Audit project, there is no obvious flaw or backdoor. However, a couple of months ago the anonymous developers abruptly shut the project down, warning on the official site that using TrueCrypt "is not secure as it may contain unfixed security issues".
There are still many open questions, but you can find a trusted TrueCrypt repository at: https://github.com/AuditProject/truecrypt-verified-mirror

Email
Email simply leaves too much metadata, even when the message is encrypted with PGP (by the way, use keys larger than 2048 bits). IM with OTR is better.
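Python's standard email module makes the point concrete: in this hypothetical message the body is (pretend) PGP-encrypted, yet the envelope metadata still travels in the clear for any provider or observer to read.

```python
from email.message import EmailMessage

# Hypothetical message: the body is "PGP-encrypted", but the headers
# (who, to whom, when, about what) are not protected at all.
msg = EmailMessage()
msg["From"] = "researcher@example.org"
msg["To"] = "source@example.net"
msg["Subject"] = "Meeting about the APT report"
msg["Date"] = "Wed, 01 Oct 2014 10:00:00 +0000"
msg.set_content("-----BEGIN PGP MESSAGE-----\n...\n-----END PGP MESSAGE-----\n")

# Everything an observer on the wire (or the provider) still learns:
visible_metadata = {k: msg[k] for k in ("From", "To", "Subject", "Date")}
print(visible_metadata)
```

Even with a perfectly encrypted body, the sender, recipient, subject and timing are enough to map a researcher's contacts and activity.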
External providers cannot be trusted.

IM
Pidgin and Adium seem to be OK. But remember not to log your chats, and don't overlook the non-technical factor: you don't know who is on the other side of the conversation (even when you have verified the key).

TOR
I'd definitely recommend using an anonymizing network to shake off most of the groups that could track you. However, it cannot be considered truly "secure": many exit nodes are run by parties who can inspect outgoing traffic and correlate their logs with the source of a connection. We saw an example of this in the Harvard bomb threat case.
TOR itself has also been the target of many attack attempts in recent times.
So don't blindly trust TOR for anything very sensitive, but do use it for your daily activities. Never reveal your true IP.

Telephone
A total nightmare in terms of OPSec. The simple recommendation is to get rid of it! But this won't happen.
At least don't do anything sensitive with it; instead, use burner phones, and don't use them at home or at work.

Conclusions
Perfect OPSec is almost impossible. However implementing basic OPSec practices should become second nature for every researcher. Once you internalize the need to apply OPSec you will be more careful and hopefully, avoid rookie mistakes like talking too much and bragging about your research.
The most important things, beyond any tool, are being meticulous, applying the right level of OPSec according to your situation and understanding what you can actually hope to achieve.
This is just a brief introduction to a complex topic, but we hope it could be a useful eye-opener, especially for our fellow security researchers.
In almost any company the IT security department faces two priority tasks: ensuring that critical systems operate continuously and reducing the risk of attacks on the corporate network. One of the most effective approaches to both these problems is to restrict the privileges of system users.
In terms of IT security, critical systems have two basic properties - integrity and availability - that affect their operational continuity. To protect a corporate network from attacks it is necessary to reduce the attack surface by reducing the number of devices and network services available from outside the corporate network and by protecting the systems and services that require such access (web services, gateways, routers, workstations, etc.). The main attack vector into a corporate network is through its users' computers connected to the Internet.
Theoretically, to protect critical systems from unauthorized changes and reduce the possibility of attacks on the corporate network, you should:
- specify those objects (equipment, systems, business applications, valuable documents, etc.) on the corporate network that require protection;
- describe the company's business processes and use those to help determine the levels of access to the protected objects;
- ensure that each subject (a user or a corporate application) has a unique account;
- limit subjects' access to objects, i.e. to restrict the rights of the subjects within the business processes;
- ensure that all operations between the subjects and the objects are logged and the logs are stored in a safe place.
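The theoretical model above (unique accounts, restricted subject-to-object access and logged operations) can be sketched in a few lines of Python; the subjects, objects and access matrix here are invented for illustration:

```python
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

# Hypothetical access matrix: subject -> objects it may touch,
# derived from the company's business processes.
ACL = {
    "alice":      {"finance_db": "read", "contracts": "read"},
    "backup_svc": {"finance_db": "read", "contracts": "read"},
    "admin_ivan": {"finance_db": "admin", "contracts": "admin"},
}

def access(subject: str, obj: str, op: str) -> bool:
    """Check a subject's right to perform op on obj, logging every attempt."""
    allowed_op = ACL.get(subject, {}).get(obj)
    granted = allowed_op in (op, "admin")
    audit_log.info("subject=%s object=%s op=%s granted=%s",
                   subject, obj, op, granted)
    return granted
```

In this sketch access("alice", "finance_db", "write") is refused and logged, while the admin account passes; a real deployment would ship the audit log to a separate, append-only store.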
In practice, it works more like this:
- all corporate documents are stored centrally in shared folders on one of the company's servers (for example, on the Document Controller server);
- access to critical systems is denied to everyone except administrators, but any administrator can log into the system remotely to repair a failure quickly;
- sometimes administrators use a "shared" account;
- all employees have the limited privileges of a 'standard user', but on request anyone can get local administrator rights.
Technically, it is much easier to protect critical systems than workstations: changes in business processes are rare, regulations vary little and can be drawn up to account for even the smallest details. By contrast the users' work environment is chaotic, their processes change rapidly and the protection requirements change along with them. In addition, many users are suspicious of any restrictions, even when there is no impact on workflow. Therefore, the traditional protection of users is based on the principle 'it is better to miss malicious software than to block something really important'.
Last year, Avecto conducted a study called "2013 Microsoft Vulnerabilities Study: Mitigating Risk by Removing User Privileges" and concluded that "by removing local administrator rights it is possible to reduce the risk of exploitation of 92% of critical vulnerabilities in Microsoft software". The conclusion seems logical but it should be noted that Avecto did not test vulnerabilities; it only analyzed data from the Microsoft Vulnerability Bulletin 2013. Nevertheless, it is clear that malicious software running without administrator rights cannot install a driver, create/modify files in protected directories (%systemdrive%, %windir%, %programfiles%, etc.), change system configurations (including writing to the HKLM registry hive) and, most importantly, cannot use privileged API functions.
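Privilege level also matters to attackers themselves: malware commonly checks whether it is running elevated before choosing a technique. A rough cross-platform sketch of such a check (the function name is ours, not from any particular product):

```python
import ctypes
import os

def running_as_admin() -> bool:
    """Best-effort check for elevated rights on POSIX and Windows."""
    if hasattr(os, "geteuid"):               # Linux, macOS, other POSIX
        return os.geteuid() == 0
    try:                                     # Windows
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        return False                         # unknown platform: assume not

print("elevated:", running_as_admin())
```

A standard user account makes this check return False, cutting off the driver-installation and HKLM-persistence paths described above.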
In reality, though, the lack of administrator rights is not a serious obstacle for either malicious software or a hacker penetrating the corporate network. Firstly, any system has dozens of vulnerabilities that can open up the necessary rights, up to kernel-level privileges. Secondly, there are threats that can be carried out with standard user privileges alone. The diagram below shows possible attack vectors that do not require any administrator rights. Let's have a closer look at them.
With only standard user privileges, an attacker gets full access to the memory of all processes running under the user's account. This is enough to inject malicious code into processes in order to control the system remotely (a backdoor), intercept keystrokes (a keylogger), modify content in the browser, and so on.
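To illustrate the point, even an unprivileged process can enumerate every other process running under the same account; a Linux-specific sketch using the /proc filesystem (the helper name is ours):

```python
import os

def my_processes():
    """List PIDs of processes owned by the current user (Linux /proc)."""
    uid, pids = str(os.getuid()), []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue                       # skip non-process entries
        try:
            with open(f"/proc/{entry}/status") as f:
                for line in f:
                    if line.startswith("Uid:"):
                        if line.split()[1] == uid:   # real UID matches ours
                            pids.append(int(entry))
                        break
        except OSError:
            continue                       # process exited while scanning
    return pids

print(len(my_processes()), "processes run under this account")
```

Every PID returned is a process whose memory, on most configurations, the same user can attach to and manipulate without any elevation.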
Since most antivirus programs can detect attempts to inject unknown code into processes, attackers often use stealthier methods. One alternative way to implant a backdoor or a keylogger in the browser is through plugins and extensions. Standard user privileges are enough to install a plugin, and its code can do almost everything a fully-featured Trojan is capable of: remotely controlling the web browser, logging data entered in browser traffic, interacting with web services and modifying page content (phishing).
Fraudsters are also interested in standard office applications (such as email and IM clients) which can be used to attack other network users (including phishing and social engineering). Scammers can access programs like Outlook, The Bat, Lync, Skype, etc. via the APIs and local services of those applications, as well as by injecting code into the relevant processes.
Of course it's not just applications that are of value to fraudsters; the data stored on the PC is also a potential goldmine. In addition to corporate documents, attackers often look for application files containing passwords, encrypted data and digital keys (SSH, PGP). If the user's computer holds source code, attackers could try to inject their own code into it.

Domain attacks
Since the accounts of most corporate users are domain accounts, the domain authentication mechanisms (Windows Authentication) provide the user with access to various network services on a corporate network. This access is often provided automatically without any additional verification of the username and password. As a result, if the infected user has access to the corporate database, attackers can easily take advantage of it.
Domain authorization also allows attackers to access all network folders and disks available to the user, share internal resources via the intranet and sometimes even access other workstations on the same network segment.
In addition to network folders and databases, the corporate network often includes various network services such as remote access, FTP, SSH, TFS, GIT, SVN, etc. Even if dedicated non-domain accounts are used to access these services, attackers can easily utilize them while the user is working on his computer (i.e. during an active session).

Protection
It is almost impossible to provide a high level of protection for workstations merely by denying users administrative rights. Installing antivirus software on a workstation will increase its security but won't solve all problems. To achieve high security levels, Application Control technology should consist of three key elements:
- Default Deny, which only allows the installation and running of software that has been approved by the administrator. In this case, the administrator does not have to put each individual application (hash) on the list of trusted software. There is a wide variety of generic tools available to enable dynamic whitelisting of all software signed by an approved certificate, created by an approved developer, obtained from a trusted source or contained in the Whitelisting database of a security software provider.
- Application Control that can restrict the work of trusted applications according to their functions. For example, for normal operation the browser should be able to create network connections but it does not need to read/write other processes in the memory, connect to online databases or store files on the network.
- Update management that ensures all software on workstations is updated promptly, reducing the risk of infection via update mechanisms.
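The Default Deny element above can be reduced to a simple idea: hash the binary and refuse anything not explicitly approved. A minimal sketch, assuming a hash-based whitelist only (real products also match certificates, publishers and trusted sources):

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical whitelist maintained by the administrator.
APPROVED_HASHES = set()

def may_execute(path: str) -> bool:
    """Default Deny: only binaries with an explicitly approved hash run."""
    return sha256_of(path) in APPROVED_HASHES

# Demo: an unknown binary is denied until the administrator approves it.
fd, tool = tempfile.mkstemp()
os.write(fd, b"#!/bin/sh\necho hi\n")
os.close(fd)
denied_first = may_execute(tool)        # unknown hash -> denied
APPROVED_HASHES.add(sha256_of(tool))
allowed_after = may_execute(tool)       # approved -> allowed
os.unlink(tool)
```

Note that the deny-by-default direction is what makes this stronger than a blacklist: any malware the administrator has never seen is blocked automatically.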
In addition, specific products which feature Application Control can provide a range of useful functions based on this technology: inventory, control over software installed on the network, event logs (which will be useful in the case of incident investigation), etc.
On the one hand, the combination of technologies can provide users with everything they need for work and even for entertainment and is flexible enough to deal with changing requirements. On the other hand, the chances of an attacker gaining access to the protected system are extremely limited. No doubt, this is the best balance between flexibility and security in protecting a corporate network.