Both the home and personal online offerings of Microsoft Outlook (e.g., Outlook.com, Office 365 Home, or Office 365 Personal) and the professional Office 365 offerings (e.g., those including Office 365 Advanced Threat Protection) may rewrite links in received emails with the goal of protecting users against certain threats (e.g., phishing).
For various reasons, one might want to rewrite these “safelinks” back into their original form.
The script unsanitize-safelinks does exactly this. It can, for example, be used for displaying emails nicely in mutt or other text-based mail clients. In your “.muttrc” you need to add or edit the following configuration:
set display_filter="unsanitize-safelinks"
If you want to also rewrite the links when using tools such as urlscan, use:
macro index,pager \cb "<pipe-message> unsanitize-safelinks | urlscan<Enter>"
And the following trick rewrites the links prior to editing a message (e.g., when replying):
set editor="unsanitize-safelinks -i %s && $EDITOR %s"
Finally, if links should be rewritten when viewing the HTML part, you need to edit your .mailcap entry for the type text/html:
text/html; unsanitize-safelinks -i --html %s && /usr/bin/sensible-browser %s; description=HTML Text; nametemplate=%s.html
The project is licensed under a 2-clause BSD license and available at: https://git.logicalhacking.com/adbrucker/unsanitize-safelinks.
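The core rewriting idea is simple: a safelink carries the original target in the url query parameter of a *.safelinks.protection.outlook.com URL. The following is a minimal sketch of such a decoder in TypeScript; it is an illustration of the idea, not the actual script:

```typescript
// Sketch: recover the original target hidden inside an Outlook "safelink".
// Assumes the wrapped target sits in the `url` query parameter of a
// *.safelinks.protection.outlook.com URL; anything else passes through.
function unsanitizeSafelink(link: string): string {
  let u: URL;
  try {
    u = new URL(link);
  } catch {
    return link; // not a URL at all: leave untouched
  }
  if (!u.hostname.endsWith(".safelinks.protection.outlook.com")) {
    return link; // not a safelink
  }
  // searchParams.get() already percent-decodes the value
  return u.searchParams.get("url") ?? link;
}
```

The real script operates on whole message bodies rather than single URLs; this sketch only shows the per-link transformation.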
Luckily, an increasing number of publishers allow authors of (academic) papers to publish a pre-print of their accepted papers on their personal or institutional website. This eases access to those papers significantly, as the “official” version on the publisher’s website is often behind a paywall. Most publishers require that the pre-prints published by the author contain a statement referring to the official version.
Thus, the only remaining question is: how to produce a pre-print containing this reference with as little effort as possible? If you are using LaTeX for writing your papers, the authorarchive package might be the solution.
Adding the self-archiving note to a paper formatted with Springer’s LNCS style is as easy as adding
\usepackage[LNCS,
  key=brucker-authorarchive-2016,
  year=2016,
  publication={Anonymous et al. (eds). Proceedings of the International
               Conference on LaTeX-Hacks, LNCS~42. Some Publisher, 2016.},
  startpage={42},
  doi={00/00\_00},
  doiText={00/00\_00},
  nocopyright
]{authorarchive}
to the preamble of your paper. The package also supports advanced features such as embedding bibliographic entries (e.g., for BibTeX) into the final PDF.
The LaTeX package “authorarchive” is a LaTeX style for producing author self-archiving copies of (academic) papers. It is available on CTAN and development versions are available in the authorarchive git repository. The package is dual-licensed under a 2-clause BSD-style license and/or the LPPL version 1 or any later version.
Apache Cordova is a widely used framework for writing mobile apps that follows the “hybrid app” paradigm. A hybrid app is a mobile app that is partly implemented in platform-neutral HTML5/JavaScript and partly in platform-specific languages (e.g., Java or Objective-C).
Static (data flow) analysis of hybrid apps that supports the analysis of both the platform independent and the platform specific parts in a unified way (e.g., for finding injection attacks) is an unsolved problem.
The main problem with statically analyzing Cordova apps is that many vulnerabilities in Cordova applications exploit data flows that cross the boundary between HTML/JavaScript and native code. Thus, a static tool should be able to analyze these cross-language data flows.
There are, in principle, three ways of implementing a static analysis of cross-language data flows in Cordova apps:
We consider the second approach a good compromise between thoroughly analyzing all possible cross-language data flows and performance (i.e., not repeatedly scanning the same code). We implemented this approach in a prototype, and its evaluation shows that it reliably detects cross-language data flows in Cordova applications. For more details, have a look at our ESSoS 2016 paper [1].
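To make the notion of a cross-language data flow concrete, here is a toy reachability-based taint analysis over a unified graph whose nodes belong to either the JavaScript or the native side, with an edge modeling the Cordova bridge. This is a hypothetical sketch in TypeScript, not the prototype from the paper:

```typescript
// Toy taint reachability over a unified data-flow graph. Node ids are
// prefixed with their language side ("js:" or "java:"); an edge from a
// js: node to a java: node models a Cordova plugin (bridge) call.
function reachableSinks(
  edges: Array<[string, string]>,
  sources: string[],
  sinks: string[],
): string[] {
  const reached = new Set(sources);
  const work = [...sources];
  // Standard worklist propagation: mark everything reachable from a source.
  while (work.length > 0) {
    const n = work.pop()!;
    for (const [from, to] of edges) {
      if (from === n && !reached.has(to)) {
        reached.add(to);
        work.push(to);
      }
    }
  }
  // A sink in the reached set witnesses a (possibly cross-language) flow.
  return sinks.filter((s) => reached.has(s));
}
```

An analysis that stops at the language boundary would miss a flow such as js:location.hash → js:exec-arg → java:plugin-handler → java:sql-query; unifying both sides in one graph makes it a plain reachability question.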
The question whether FLOSS (Free/Libre and Open-Source Software) is more or less secure than proprietary software is often not the right question to ask. The much more important question is: how do we integrate FLOSS components securely into a Secure Software Development Process? Moreover, if you think about it, the potential challenges in securely integrating FLOSS components also arise when integrating other types of third-party components. As a software vendor, you are ultimately responsible for the security of the overall product, regardless of which technologies and components were used in building it (you can either read more, or watch the video of our AppSecEU presentation).
Ideally, third-party components should, security-wise, be treated as your own code and, thus, they impact all aspects of the Secure Software Development Lifecycle.
Before we continue, let’s quickly review the three most important types of third-party components:
Freeware is ubiquitous, i.e., easily available to developers without triggering formal processes. Thus, it is the most problematic type, as its use is usually hard to track, and it is usually hard to get fixes or updates in a timely manner (or any maintenance guarantees). FLOSS is also easily available, but it does not have the maintenance problem: you can fix it yourself, and there are plenty of companies offering support for FLOSS components. Thus, when you are tracking the use of FLOSS (as well as the use of proprietary third-party components) in your organization, proprietary and FLOSS components differ mainly in one aspect: FLOSS, by definition, provides you with the possibility to fix issues yourself (or to ask an arbitrary third party to do it for you).
Let’s face the truth: any third party component (as any self-developed code) can contain vulnerabilities that need to be fixed during the lifecycle of the consuming applications. Thus, instead of asking which type of components is more secure (answer: neither, there is bad and good software in both camps), it is more important to control/plan the risk and effort associated with consuming third party components.
Thus, FLOSS just provides you with one additional opportunity: fixing the issues yourself. Moreover, when doing research in software security, FLOSS has the additional advantage that data about software versions, vulnerabilities, and fixes is available that can be used for validating research ideas. For example, we are researching methods
We have already published preliminary results [1], and we expect much more to come in the near future.
Of course, one would also like to precisely predict the risk (or the likelihood that vulnerabilities will be detected in a specific third-party component during the maintenance period of the consuming applications). Sadly, our research shows that this is not (easily) possible and, again, is the wrong question to ask.
Let’s get back to some pragmatic recommendations if you are using third-party components in general, and FLOSS components in particular, as part of your software development. As we cannot easily predict future vulnerabilities, we focus on strategies for controlling the risk and effort, which should be the main focus of a good project manager anyway.
To control (minimize) the risk of third-party components, we recommend integrating the management of third-party components into your Secure Software Development Lifecycle right from the start and obtaining them from trustworthy sources. And if you are in the lucky situation of being able to choose among several components providing the necessary functionality, we have some tips as well:
Prefer projects with private bug trackers: being able to report security issues to a FLOSS project privately allows you to discuss potential fixes with the community without putting your own customers, or the other consumers of the FLOSS component, at risk (e.g., by inadvertently publishing a 0-day).
Prefer projects with a mature (healthy) Secure Development Lifecycle: as nobody is immune to security vulnerabilities, it is important to select projects that take security seriously. A good indicator is the maturity of the project’s Secure Software Development Lifecycle, e.g., assessed by answering questions such as
To control (minimize) the effort of consuming third-party components, again, the Secure Software Development Lifecycle is the most important part to look at, followed by the project selection.
More and more (mobile) apps are written using Apache Cordova (or its proprietary variants such as PhoneGap or SAP Kapsel). Apache Cordova is a framework that allows developers to easily write (mobile) apps for many different platforms using a hybrid development approach, i.e., combining web development technologies (HTML5 and JavaScript) with native development in languages such as Java or Objective-C.
Combining web and native technologies creates new security challenges as, e.g., an XSS attacker becomes more powerful. For example, an XSS vulnerability might allow an attacker to access the calendar of a device or to delete the address book.
On the one hand, Cordova apps are HTML5 applications, i.e., they share all typical features (e.g., JavaScript code that is downloaded at runtime) and security risks (e.g., XSS) of web applications. On the other hand, Cordova apps share the features (e.g., full device access) and security risks (e.g., SQL injections, privacy leaks) of native apps.
To limit the typical web application threats, WebViews (which execute the HTML5/JavaScript part of a Cordova app) re-use the well-known security mechanisms of web browsers, such as the same-origin policy. Moreover, WebViews are separated from the regular web browser on Android, e.g., WebViews have their own cache and cookie store. Still, there are subtle differences that make implementing secure Cordova apps a challenge even for experienced web application developers.
A plugin is a mechanism for drilling holes into the sandbox of a WebView, making the traditional web attacker much more powerful as, e.g., an XSS attack might grant access to arbitrary device features. The root cause of such vulnerabilities can be located in Cordova itself (e.g., CVE-2013-4710 or CVE-2014-1882) or in programming and configuration mistakes by the app developer.
Do not forget that Cordova apps are web applications; thus, you need to
And keep in mind that the WebView sandbox is not as protective as the one in modern desktop browsers.
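For instance, the standard web-side defense of encoding untrusted data before it reaches the DOM applies unchanged inside a WebView. A minimal sketch (keep in mind that in a Cordova app an XSS slip can additionally expose device APIs via plugins):

```typescript
// Map every HTML metacharacter to its entity.
const HTML_ESCAPES: Record<string, string> = {
  "&": "&amp;",
  "<": "&lt;",
  ">": "&gt;",
  '"': "&quot;",
  "'": "&#39;",
};

// Encode untrusted text so that it is rendered as data, not parsed as markup.
function escapeHtml(untrusted: string): string {
  return untrusted.replace(/[&<>"']/g, (c) => HTML_ESCAPES[c]);
}
```

This is only one of the usual web defenses; context-specific encoding (attributes, URLs, JavaScript strings) and a strict Content Security Policy matter just as much in a hybrid app.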
Cordova apps are native (Java, Objective-C, Swift, .NET, …) apps and, thus, you need to apply the best practices of native development, such as:
Cordova apps are mobile apps, and you need to use the security features of the mobile platform correctly, e.g.,
Finally, Cordova apps are Cordova apps:
Finally, did you know that

<application android:debuggable="true" />

on Android disables the certificate checks in WebViews?