The Leadership Laboratory recently posted a book review of The Art of Software Security Assessment. The book raises a number of issues that we would love to explore further, and the authors, Mark Dowd, John McDonald, and Justin Schuh, have graciously agreed to an interview. One section was titled "Code Auditing and the Development Life Cycle," and we used that as the basis of the interview.
Ryan Barnett, Director of Application Security Training at Breach Security, Inc., talks with Stephen Northcutt about the current state of web application security.
Dinis Cruz, Director of Advanced Technology for Ounce Labs, talks with Stephen Northcutt about the many facets of OWASP, as well as the important questions that need real answers in order to develop secure web applications.
Brian Chess, Chief Scientist for Fortify Software, talks with Stephen Northcutt about static analysis and other web application security solutions.
Stephen Northcutt interviews Caleb Sima about the development of Caleb's company, SPI Dynamics, and the increasing need for solutions for web application security.
An interview with David Hoelzer describing DAD, an open source Windows event log and syslog management tool that allows you to aggregate logs from hundreds to thousands of systems in real time.
What is always enjoyable when upgrading a tool to a new version is discovering the new features, and then … the delicate hunt for the original functions … not to mention the functions that are now gone … With each migration, the tool ...
It is really difficult to keep up with all the new features and other assorted information about SharePoint and Office 2016; it is easy to miss some of them … in this case not a new feature, but rather a disappearance, a deprecated feature...
The lower layers in the modern computing infrastructure are written in languages threatened by exploitation of memory management errors. Recently deployed exploit mitigations such as control-flow integrity (CFI) can prevent traditional return-oriented programming (ROP) exploits but are much less effective against newer techniques such as Counterfeit Object-Oriented Programming (COOP) that execute a chain of C++ virtual methods. Since these methods are valid control-flow targets, COOP attacks are hard to distinguish from benign computations. Code randomization is likewise ineffective against COOP. Until now, however, COOP attacks have been limited to vulnerable C++ applications, which makes it unclear whether COOP is as general and portable a threat as ROP. This paper demonstrates the first COOP-style exploit for Objective-C, the predominant programming language on Apple’s OS X and iOS platforms. We also retrofit the Objective-C runtime with the first practical and efficient defense against our novel attack. Our defense is able to protect complex, real-world software such as iTunes without recompilation. Our performance experiments show that the overhead of our defense is low in practice.
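The abstract's core observation, that every step of a COOP chain targets a legitimate method, can be illustrated with a toy dispatch model. The sketch below is purely conceptual (no actual exploitation, and nothing from the paper's implementation): it models dynamic dispatch the way C++ vtables or Objective-C method tables do, as a mapping from names to code, and shows why a coarse CFI check that only asks "is the target a real method?" cannot reject a counterfeit pairing of receiver and method.

```python
# Conceptual toy model of dispatch-based control flow (not exploit code).
class Logger:
    def log(self):
        print("logging")

class FileWriter:
    def write(self):
        print("writing file")

# A coarse CFI-style policy: "the call target must be some defined method".
VALID_METHODS = {Logger.log, FileWriter.write}

def dispatch(obj, method):
    # The check passes for ANY real method, regardless of the receiver.
    assert method in VALID_METHODS, "CFI violation"
    method(obj)

dispatch(Logger(), Logger.log)        # benign: method matches the receiver
dispatch(Logger(), FileWriter.write)  # counterfeit pairing: still accepted
```

Because both calls dispatch to genuine methods, a policy at this granularity sees them as identical, which is the property COOP-style attacks exploit.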
Modeling relation paths has offered significant gains in embedding models for knowledge base (KB) completion. However, enumerating paths between two entities is very expensive, and existing approaches typically resort to approximation with a sampled subset. This problem is particularly acute when text is jointly modeled with KB relations and used to provide direct evidence for facts mentioned in it. In this paper, we propose the first exact dynamic programming algorithm which enables efficient incorporation of all relation paths of bounded length, while modeling both relation types and intermediate nodes in the compositional path representations. We conduct a theoretical analysis of the efficiency gain from the approach. Experiments on two datasets show that it addresses representational limitations in prior approaches and improves accuracy in KB completion.
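The shape of such a dynamic program can be sketched with a standard bounded-length path recursion: instead of enumerating paths explicitly, extend all paths ending at each node by one edge per step, so the cost stays polynomial in the number of edges. The sketch below is a simplified illustration with scalar edge scores rather than the paper's compositional vector representations; the `edges` structure and `all_paths_score` name are invented for the example.

```python
from collections import defaultdict

def all_paths_score(edges, source, target, max_len):
    """Sum of products of edge weights over ALL paths of length <= max_len
    from source to target, computed by dynamic programming rather than by
    enumerating the (potentially exponentially many) paths."""
    # score[v] = total score of all paths of the current length from source to v
    score = defaultdict(float)
    score[source] = 1.0
    total = 0.0
    for _ in range(max_len):
        nxt = defaultdict(float)
        for u, s in score.items():
            for v, w in edges.get(u, []):
                nxt[v] += s * w  # extend every path ending at u by edge (u, v)
        score = nxt
        total += score[target]
    return total

# Toy KB: entity -> list of (neighbor, relation score)
edges = {
    "obama":  [("hawaii", 0.9)],
    "hawaii": [("usa", 0.8)],
}
print(all_paths_score(edges, "obama", "usa", max_len=3))  # 0.72, via hawaii
```

Replacing the scalar multiply/add with vector composition and pooling recovers the flavor of the paper's exact algorithm over path representations.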
Time-travel debugging (TTD) lets developers step backward as well as forward through a program’s execution. TTD is a powerful mechanism for diagnosing bugs, but previous approaches suffer from poor performance due to checkpoint and logging overhead, or poor fidelity because important information like GUI state is not tracked. In this paper, we describe how to provide high-performance and high-fidelity TTD to programs written in managed languages. Previous high-performance debuggers treat components external to the program like the GUI as black boxes, but that is not sufficient for high-fidelity time-travel. Instead, we advocate for a gray-box approach that keeps these components live and in sync with the program during time-travel. The key insight is that managed runtime APIs expose most of the functionality required to do this; where they do not, we extend the runtime with a small number of non-intrusive interrogative interfaces. To demonstrate the power of our gray-box approach, we implement ReJS, a time-traveling debugger for web applications. ReJS imposes imperceptible tracing overhead, and its logs typically grow less than 1 KB/s. As a result, ReJS is performant enough to be deployed in the wild; real client machines can ship buggy execution traces across the wide area to developer-side machines for debugging.
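The reason logs can stay this small is the classic record/replay idea: in a managed runtime, execution is deterministic once the nondeterministic inputs are fixed, so only those inputs need to be logged, and any earlier state can be reconstructed by re-executing. The skeleton below is a generic sketch of that idea, not ReJS's actual design; the `Runtime` class and its hooks are invented for illustration.

```python
import random

class Runtime:
    """Toy deterministic interpreter whose only nondeterminism is rand()."""
    def __init__(self, log=None):
        self.replaying = log is not None
        self.log = log if log is not None else []
        self.pos = 0
        self.state = 0

    def rand(self):
        if self.replaying:
            v = self.log[self.pos]
            self.pos += 1
        else:
            v = random.randint(0, 9)
            self.log.append(v)  # record ONLY the nondeterministic input
        return v

    def step(self):
        self.state += self.rand()  # everything else is deterministic

# Record a run of 5 steps.
rec = Runtime()
for _ in range(5):
    rec.step()

# "Time travel" to just after step 3: replay deterministically from the log.
rep = Runtime(log=rec.log)
for _ in range(3):
    rep.step()
print("state after step 3:", rep.state)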
We present fast and compact implementations of FourQ (ASIACRYPT 2015) on field-programmable gate arrays (FPGAs), and demonstrate, for the first time, the high efficiency of this new elliptic curve on reconfigurable hardware. By adapting FourQ's algorithms to hardware, we design FPGA-tailored architectures that are significantly faster than any other ECC alternative over large prime characteristic fields. For example, we show that our single-core and multi-core implementations can compute at rates of 6389 and 64730 scalar multiplications per second, respectively, on a Xilinx Zynq-7020 FPGA, which represents factor-2.5 and factor-2 speedups over the corresponding variants of the fastest Curve25519 implementation on the same device. These results show the potential of deploying FourQ on hardware for high-performance and embedded security applications. All the presented implementations exhibit regular, constant-time execution, protecting against timing and simple side-channel attacks.
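The "regular, constant-time execution" highlighted at the end typically comes from a fixed, key-independent operation schedule such as a Montgomery-style ladder, where every scalar bit triggers the same add/double sequence. The sketch below shows that regular structure in a toy group (integers mod p under addition, so the answer is just k*g mod p); it is not FourQ's curve arithmetic, and Python itself gives no real timing guarantees, so this is shape only.

```python
def cswap(bit, a, b):
    # Branch-free conditional swap: same operations regardless of bit.
    mask = -bit            # 0 when bit == 0, all-ones when bit == 1
    t = (a ^ b) & mask
    return a ^ t, b ^ t

def ladder(k, g, p, bits):
    """Montgomery-ladder-shaped scalar multiplication in a toy group.
    Every bit costs exactly one 'add' and one 'double', in the same
    order, which is what makes the schedule key-independent."""
    r0, r1 = 0, g % p
    for i in reversed(range(bits)):
        bit = (k >> i) & 1
        r0, r1 = cswap(bit, r0, r1)
        r1 = (r0 + r1) % p  # stand-in for point addition
        r0 = (r0 + r0) % p  # stand-in for point doubling
        r0, r1 = cswap(bit, r0, r1)
    return r0

p, g, k = 2**13 - 1, 7, 1234
assert ladder(k, g, p, bits=16) == (k * g) % p
print(ladder(k, g, p, bits=16))
```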
We describe VisFlow, a system that efficiently analyzes the feeds from many cameras. Ubiquitous camera deployments are widely used for security, traffic monitoring, and customer analytics. However, existing methods to analyze the video feeds in real-time or post-facto do not scale and are error-prone. Our key contributions are two-fold. Surveillance video is hard to analyze because it has low resolution, many objects per frame, varying light, and so on. By leveraging the fixed perspective of surveillance cameras, we show that typical vision tasks can be performed with high accuracy. Next, to efficiently process many feeds, we use a relational dataflow system. We observe that (i) even vision queries that seem different have common parts (e.g., background subtraction and feature extraction), (ii) often neither camera-level nor frame-level parallelism leads to good executions, and (iii) the best execution plans vary with input size. By extending query optimization techniques, VisFlow computes efficient execution plans for vision queries, parallelizing as needed. Evaluation on traffic videos from a large city on complex vision queries shows many-fold improvements in accuracy, query completion time and resource usage relative to existing systems.
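Observation (i), that distinct vision queries share common parts, is essentially common-subexpression elimination over dataflow plans: shared prefixes such as decoding and background subtraction should execute once and feed every downstream query. A minimal sketch of that merging step follows, with invented operator names and a toy plan representation rather than VisFlow's actual relational optimizer.

```python
def plan(queries):
    """Merge queries (each a list of operator names applied in order)
    into a DAG so shared prefixes run once and are reused."""
    cache = {}          # operator-prefix tuple -> node id
    nodes, edges = [], []
    for q in queries:
        prev = None
        for i in range(1, len(q) + 1):
            key = tuple(q[:i])
            if key not in cache:
                cache[key] = len(nodes)
                nodes.append(q[i - 1])
                if prev is not None:
                    edges.append((prev, cache[key]))
            prev = cache[key]
    return nodes, edges

counting = ["decode", "bg_subtract", "detect", "count_vehicles"]
tracking = ["decode", "bg_subtract", "detect", "track_objects"]
nodes, edges = plan([counting, tracking])
print(nodes)  # decode/bg_subtract/detect appear once, shared by both queries
print(edges)
```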
Helping citizens to resolve grievances is an important part of many e-governance initiatives. In this paper, we examine two contemporary initiatives that use ICTs to help citizens resolve grievances in central India. One system is a state-run call center (the CM Helpline), while the other is an independent citizen journalism service (CGNet Swara). Despite similarities in their high-level goals, approach, and geographies served, the systems have key differences in their use of technology, their level of transparency, and their relationship to government. Using qualitative interviews, field immersions, and other data, we analyze how these differences impact the experiences of citizens, officials, and the intermediaries between them. We synthesize our observations into a set of recommendations for the design of future ICT-enabled grievance redressal systems.
During the first decade of the 21st century, the rise of mobile feature phones in India saw the development of both an economy of informal media exchange and a culture of active media sharing for entertainment. Mobile phone owners paid for pirated movies and music on the grey market, and they traded them with one another, even using poorly designed mechanisms such as Bluetooth file exchange. In this paper, we update what is known about the dynamic mobile media sharing culture through qualitative interviews conducted with low- and lower-middle-class participants in and around Bangalore. We find that with the increasing penetration of smartphones and data packs, media sharing has not only continued, but has blossomed into a rich and varied range of activity in which mobile owners display sophisticated knowledge and behaviors. Our participants deftly juggle multiple media devices, mobile handsets, SIM cards, storage devices, mobile applications, and cloud services as a way to navigate issues of cost, file size, data bandwidth, physical proximity, and social engagement styles. We consider our findings in the context of domestication and amplification theories of technology.
Developers deploying web applications in the cloud often need to determine how changes to service tiers or runtime load may affect user-perceived page load time. We devise and evaluate a systematic methodology for exploring such “what-if” questions when a web application is deployed. Given a website, a web request, and a “what-if” scenario with a hypothetical configuration and runtime conditions, our methodology, embedded in a system called WebPerf, can estimate a distribution of cloud latency of the request under the “what-if” scenario. In achieving this goal, WebPerf makes three contributions: (1) automated instrumentation of websites written in an increasingly popular task-asynchronous paradigm, to capture causal dependencies of various computation and asynchronous I/O calls; (2) an algorithm to use the call dependencies, together with online- and offline-profiled models of various I/O calls, to estimate a distribution of end-to-end latency of the request; and (3) an algorithm to find the optimal measurements to take within a limited time to minimize modeling errors. We have implemented WebPerf for Microsoft Azure. In experiments with six real websites and six scenarios, WebPerf’s median estimation error is within 7% in all experiments.
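Contribution (2) amounts to combining per-call latency distributions along the causal dependency graph: sequential dependencies compose by addition, while parallel asynchronous calls compose by taking the maximum of their completion times. A hedged Monte Carlo sketch of that combination step follows; the call names and the Gaussian latency models are invented stand-ins for WebPerf's profiled models.

```python
import random

# Invented per-call latency models (seconds); WebPerf would profile these
# online and offline rather than assuming Gaussians.
dists = {
    "sql":    lambda: random.gauss(0.040, 0.010),
    "blob":   lambda: random.gauss(0.025, 0.005),
    "render": lambda: random.gauss(0.010, 0.002),
}

def sample_request():
    """One Monte Carlo sample of end-to-end latency: two async I/O calls
    awaited in parallel (max), then rendering runs after both (sum)."""
    io = max(dists["sql"](), dists["blob"]())  # parallel awaits
    return io + dists["render"]()              # sequential dependency

samples = sorted(sample_request() for _ in range(10000))
print("median: %.3fs  p95: %.3fs" % (samples[5000], samples[9500]))
```

Swapping the Gaussians for a scenario's hypothetical models (a slower tier, added load) yields the "what-if" distribution for the same dependency graph.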
We introduce a latent activity model for workplace emails, positing that communication at work is purposeful and organized by activities. We pose the problem as probabilistic inference in graphical models that jointly capture the interplay between latent activities and the email contexts they govern, such as the recipients, subject and body. The model parameters are learned using maximum likelihood estimation with an expectation maximization algorithm. We present three variants of the model that incorporate the recipients, co-occurrence of the recipients, and email body and subject. We demonstrate the model’s effectiveness in an email recipient recommendation task and show that it outperforms a state-of-the-art generative model. Additionally, we show that the activity model can be used to identify email senders who engage in similar activities, resulting in further improvements in recipient recommendation.
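At its core this is mixture-model inference with EM: each email receives soft responsibilities over latent activities (E-step), and activity-specific recipient distributions and priors are re-estimated from those responsibilities (M-step). The sketch below is a heavily simplified recipients-only variant with invented toy data; the paper's model also conditions on co-occurrence, subject, and body.

```python
import math, random

# Toy corpus: each email is a set of recipients.
emails = [{"alice", "bob"}, {"alice", "bob"}, {"carol"}, {"carol", "dan"}]
people = sorted({r for e in emails for r in e})
K = 2  # number of latent activities

random.seed(0)
theta = []  # theta[k][r]: prob. that activity k emails recipient r
for _ in range(K):
    w = {r: random.random() + 0.1 for r in people}
    z = sum(w.values())
    theta.append({r: w[r] / z for r in people})
pi = [1.0 / K] * K  # activity priors

for _ in range(100):
    # E-step: soft-assign each email to activities.
    resp = []
    for e in emails:
        scores = [pi[k] * math.prod(theta[k][r] for r in e) for k in range(K)]
        z = sum(scores)
        resp.append([s / z for s in scores])
    # M-step: re-estimate priors and per-activity recipient distributions.
    pi = [sum(r[k] for r in resp) / len(emails) for k in range(K)]
    for k in range(K):
        counts = {r: 1e-6 for r in people}  # light smoothing
        for e, r_ in zip(emails, resp):
            for rec in e:
                counts[rec] += r_[k]
        z = sum(counts.values())
        theta[k] = {r: counts[r] / z for r in people}

for k in range(K):
    top = max(theta[k], key=theta[k].get)
    print("activity %d: top recipient %s (p=%.2f)" % (k, top, theta[k][top]))
```

Recipient recommendation then scores candidate recipients by their probability under the activities the drafted email most plausibly belongs to.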
The majority of the world’s languages have little to no NLP resources or tools. This is due to a lack of training data ("resources") over which tools, such as taggers or parsers, can be trained. In recent years, there have been increasing efforts to apply NLP methods to a much broader swath of the world’s languages. In many cases this involves bootstrapping the learning process with enriched or partially enriched resources. We propose that Interlinear Glossed Text (IGT), a very common form of annotated data used in the field of linguistics, has great potential for bootstrapping NLP tools for resource-poor languages. Although IGT is generally very richly annotated, and can be enriched even further (e.g., through structural projection), much of the content is not easily consumable by machines since it remains "trapped" in linguistic scholarly documents and in human-readable form. In this paper, we describe the expansion of the ODIN resource—a database containing many thousands of instances of IGT for over a thousand languages. We enrich the original IGT data by adding word alignment and syntactic structure. To make the data in ODIN more readily consumable by tool developers and NLP researchers, we adopt and extend a new XML format for IGT, called Xigt. We also develop two packages for manipulating IGT data: one, INTENT, enriches raw IGT automatically, and the other, XigtEdit, is a graphical IGT editor.
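IGT's familiar three-line structure (language line, gloss line, free translation) is simple enough to sketch, and doing so shows what a machine-readable format like Xigt adds: explicit alignment between tiers instead of whitespace conventions. The example instance and field names below are invented for illustration and are not ODIN data or Xigt's actual schema.

```python
# A raw three-line IGT instance: language line, gloss line, translation.
raw = """inu-ga hashitta
dog-NOM ran
'The dog ran.'"""

def parse_igt(text):
    """Split a raw IGT instance into tiers and align language tokens
    with their glosses one-to-one (the common case in clean IGT)."""
    lang, gloss, trans = text.splitlines()
    words, glosses = lang.split(), gloss.split()
    assert len(words) == len(glosses), "misaligned IGT instance"
    return {
        "alignment": list(zip(words, glosses)),
        "translation": trans.strip("'"),
    }

igt = parse_igt(raw)
print(igt["alignment"])   # [('inu-ga', 'dog-NOM'), ('hashitta', 'ran')]
print(igt["translation"])
```

Real scholarly documents break these conventions constantly (wrapped lines, multi-word glosses, footnotes), which is exactly why automatic enrichment tools like INTENT and explicit formats like Xigt are needed.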