Algorithmic trading ...

 

rise_of_the_machines_-_algorithmic_trading_in_the_foreign_exchange_market.pdf

We study the impact of algorithmic trading in the foreign exchange market using a long time series of high-frequency data that specifically identifies computer-generated trading activity. Using both a reduced-form and a structural estimation, we find clear evidence that algorithmic trading causes an improvement in two measures of price efficiency in this market: the frequency of triangular arbitrage opportunities and the autocorrelation of high-frequency returns. Relating our results to the recent theoretical literature on the subject, we show that the reduction in arbitrage opportunities is associated primarily with computers taking liquidity, while the reduction in the autocorrelation of returns owes more to the algorithmic provision of liquidity. We also find evidence that algorithmic traders do not trade with each other as much as a random matching model would predict, which we view as consistent with their trading strategies being highly correlated. However, the analysis shows that this high degree of correlation does not appear to cause a degradation in market quality.
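The first of the paper's two efficiency measures, the frequency of triangular arbitrage opportunities, can be illustrated with a minimal sketch. The quotes and the `triangular_arbitrage_profit` helper below are hypothetical illustrations, not from the paper:

```python
# A triangular arbitrage exists when the product of exchange rates along a
# currency cycle exceeds 1 even after crossing the quoted bid/ask spreads.
def triangular_arbitrage_profit(eur_usd_bid, usd_jpy_bid, eur_jpy_ask):
    # Cycle: sell EUR for USD, sell USD for JPY, buy back EUR with JPY.
    implied_eur_jpy = eur_usd_bid * usd_jpy_bid   # JPY obtained per EUR sold
    return implied_eur_jpy / eur_jpy_ask - 1.0    # > 0 means riskless profit

# Made-up quotes: the implied cross is 1.1000 * 150.00 = 165.00 JPY/EUR,
# while EUR/JPY can be bought back at 164.50, roughly a 0.3% round trip.
profit = triangular_arbitrage_profit(1.1000, 150.00, 164.50)
```

A positive value signals that the cross rate is misaligned with the two legs by more than the spread allows; the paper finds such opportunities become rarer as computers take liquidity.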
 

dynamical_models_of_market_impact_and_algorithms_for_order_execution.pdf

In this review article, we present recent work on the regularity of dynamical market impact models and their associated optimal order execution strategies. In particular, we address the question of the stability and existence of optimal strategies, showing that in a large class of models, there is price manipulation and no well-behaved optimal order execution strategy. We also address issues arising from the use of dark pools and predatory trading.
 

automated_trading_with_genetic-algorithm_neural-network_risk_cybernetics_-_an_application_on_fx_mark.pdf

Recent years have witnessed the advancement of automated algorithmic trading systems as institutional solutions, in the form of autobots, black boxes, or expert advisors. However, little research has been done in this area with sufficient evidence to show the efficiency of these systems. This paper builds an automated trading system which implements an optimized genetic-algorithm neural-network (GANN) model with cybernetic concepts and evaluates its performance using a modified value-at-risk (MVaR) framework. The cybernetic engine includes a circular causal feedback control feature and a golden-ratio estimator, which can be applied to any form of market data in the development of risk-pricing models. The paper applies Euro and Yen forex rates as data inputs. It is shown that the technique is useful as a trading and volatility control system for institutions, including as a risk-minimizing strategy for central bank monetary policy. Furthermore, the results are achieved within a 30-second timeframe for an intra-week trading strategy, offering relatively low-latency performance. The results show that risk exposures are reduced by four to five times, with a maximum possible success rate of 96%, providing evidence for further research and development in this area.

 

a_boosting_approach_for_automated_trading.pdf

This paper describes an algorithm for short-term technical trading. The algorithm was tested in the context of the Penn-Lehman Automated Trading (PLAT) competition. The algorithm is based on three main ideas. The first idea is to use a combination of technical indicators to predict the daily trend of the stock; the combination is optimized using a boosting algorithm. The second idea is to use constant rebalanced portfolios within the day in order to take advantage of market volatility without increasing risk. The third idea is to use limit orders rather than market orders in order to minimize transaction costs.
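The second idea, constant rebalanced portfolios, can be sketched as follows. The two-asset oscillating market is a made-up illustration of the "volatility pumping" effect, not data from the competition:

```python
def crp_wealth(price_relatives, weights):
    """Wealth of a constant rebalanced portfolio: after each period the
    portfolio is rebalanced back to the fixed weight vector `weights`."""
    wealth = 1.0
    for relatives in price_relatives:  # per-asset price ratios p_t / p_{t-1}
        wealth *= sum(w * x for w, x in zip(weights, relatives))
    return wealth

# Two assets: a stock that doubles then halves, and cash. Buy-and-hold of
# the stock ends flat, but rebalancing to 50/50 each period compounds a
# factor of 1.5 * 0.75 = 1.125 per oscillation cycle.
periods = [(2.0, 1.0), (0.5, 1.0)] * 5
wealth = crp_wealth(periods, (0.5, 0.5))   # 1.125**5, about 1.80
```

The gain comes purely from rebalancing into the fallen asset, which is why the paper pairs the idea with intraday volatility rather than directional prediction.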
 

algorithmic_contracts.pdf

Algorithmic contracts are contracts in which one or more parties use an algorithm as a negotiator to choose which terms to offer or accept, or as a gap-filler, allowing the parties to explicitly agree to the results of an algorithm as part of a contract. Such agreements are already an important part of today’s economy. Areas where algorithmic contracts are already common are high-speed trading of financial products and dynamic pricing in consumer goods and services. However, contract law doctrine does not currently have an approach to evaluating and enforcing algorithmic contracts. This Article fills this significant gap in doctrinal law and legal literature.

This Article provides a taxonomy of algorithmic contracts. This task is required because different types of algorithmic contracts present different challenges to contract law. While many algorithmic contracts are readily handled by standard contract doctrine, some require additional interpretive work. Algorithms can be employed in contract formation as either mere tools or artificial agents. This distinction is based on the predictability and complexity of the decision-making tasks assigned to the algorithm. Artificial agents themselves can be clear box, where inner components or logic are decipherable by humans, or black box, where the logic of the algorithm is functionally opaque. While courts and policy makers should be mindful of the specific characteristics of algorithmic contracts in their interpretation and enforcement, traditional contract law provides adequate tools to address most algorithmic contracts.

The algorithmic contracts that present the most significant problems for current contract law are those that involve black box algorithmic agents choosing contractual terms on behalf of one or more parties. The classical interpretation of contract doctrine, which justifies contract as an expression of human will, finds that these algorithmic contracts are not properly formed at law and thus cannot be enforced in contract. This is because where algorithms serve as quasi-agents to principals in making decisions, the principals have not manifested the intent to be bound at the level of specificity that contract law requires. Algorithms are not persons, and so cannot consent beyond the scope of the principal’s manifested objectives, as true agents can. Furthermore, policy considerations of efficiency and fairness in light of technological trends also support the presumptive exclusion of black box algorithmic contracts from contract law.

However, even some black box contracts may be enforceable. This Article proposes a model for determining whether such agreements may be enforced. The approach evaluates the fit between the black box algorithm’s actions and the objectively manifested intent of the party using it to determine whether a contract can be implied. This approach draws inspiration from and contributes to the literature on artificial agents and implied-in-fact contract doctrine. Where a contract cannot be implied, restitution law and tort law allow justice to be done as between the parties. This offers a predictable approach to the enforcement of black box algorithmic contracts at law while promoting efficiency and fairness concerns in a manner traditional contract law cannot. Common law courts and state legislatures should update their approach to algorithmic contracts. The American Law Institute and other groups that seek to promote best practices in state private law should update contract and commercial law statements to expressly address algorithmic contracts. Businesses should strengthen their positions in negotiations as well as in court by clarifying their objectives in using algorithms. Giving businesses the incentive to make their objectives clear will aid in ascribing liability in all areas of law and promote responsible use of algorithms.
 

trading_strategies_within_the_edges_of_no-arbitrage.pdf

We develop a trading strategy which employs limit and market orders in a multi-asset economy where the assets are not only correlated, but can also be structurally dependent. To model the structural dependence, the midprice processes follow a multivariate reflected Brownian motion on the closure of a no-arbitrage region which is dictated by the assets' bid-ask spreads. We provide a formal framework for such an economy and solve for the value function and optimal control for an investor who takes positions in these assets. The optimal strategy exhibits two dominant features which depend on how far the vector of midprices is from the no-arbitrage bounds. When midprices are sufficiently far from the no-arbitrage edges, the strategy behaves as that of a market maker who posts buy and sell limit orders. And when the midprice vector is close to the edge of the no-arbitrage region, the strategy executes a combination of market orders and limit orders to profit from statistical arbitrages. Moreover, we discuss a numerical scheme to solve for the value function and optimal control, and perform a simulation study to discuss the main characteristics of the optimal strategy.
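A one-dimensional toy analogue of the reflected midprice dynamics can be sketched as follows. The paper's model is a multivariate reflected Brownian motion on a no-arbitrage region; the band, volatility, and simple reflection rule below are illustrative assumptions, not the paper's construction:

```python
import random

def reflected_path(n_steps, lower, upper, sigma, x0, seed=1):
    """Toy one-dimensional analogue of a midprice constrained to a
    no-arbitrage band [lower, upper]: a Gaussian random walk whose
    increments are reflected back whenever they would exit the band."""
    random.seed(seed)
    x, path = x0, []
    for _ in range(n_steps):
        x += random.gauss(0.0, sigma)
        # Reflect the overshoot back into the band instead of clipping,
        # mimicking the behaviour of a reflected Brownian motion.
        if x < lower:
            x = lower + (lower - x)
        elif x > upper:
            x = upper - (x - upper)
        path.append(x)
    return path

path = reflected_path(1000, lower=99.0, upper=101.0, sigma=0.05, x0=100.0)
```

In the paper's strategy, behaviour switches on the distance of the midprice vector from such edges: market making in the interior, statistical arbitrage near the boundary.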
 

quantitative_models_of_commercial_policy.pdf

What tariffs would countries impose if they did not have to fear any retaliation? What would occur if there was a complete breakdown of trade policy cooperation? What would be the outcome if countries engaged in fully efficient trade negotiations? And what would happen to trade policy cooperation if the world trading system had a different institutional design? While such questions feature prominently in the theoretical trade policy literature, they have proven difficult to address empirically, because they refer to what-if scenarios for which direct empirical counterparts are hard to find. In this chapter, I introduce research which suggests overcoming this difficulty by applying quantitative models of commercial policy.
 

stochastic_optimization_in_recursive_equation_systems_with_random_parameters_with_an_application_to_.pdf

A promising approach to decision making with econometric models has been developed by Holt and Theil, who postulate a quadratic utility (or loss) function in the criteria variables. Provided the model is linear with known coefficients, the optimal policy is found to be one for which the criterion function is an extremum. Since, in practice, the coefficients of a model are not known, the technique utilizes the mean values of the coefficient estimators, and for this reason it is known as the certainty equivalence approach.
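The certainty-equivalence idea can be sketched in a hypothetical one-period scalar model; the target, coefficients, and loss below are illustrative, not from the paper:

```python
# Certainty equivalence, minimal sketch: a linear model y = a*x + b with
# uncertain coefficients (a, b) and quadratic loss L = (y - y_star)**2.
# The certainty-equivalent policy replaces (a, b) by their estimated means
# and solves a_mean * x + b_mean = y_star for the control x.
def certainty_equivalent_policy(a_mean, b_mean, y_star):
    return (y_star - b_mean) / a_mean

# With mean coefficients a = 2, b = 1 and target y* = 5, the policy sets x = 2.
x = certainty_equivalent_policy(a_mean=2.0, b_mean=1.0, y_star=5.0)
```

For a linear model with additive noise this plug-in rule is exactly optimal (the certainty equivalence theorem); when the coefficients themselves are random, as in the setting above, it is only an approximation, which motivates the stochastic optimization studied in the paper.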
 
Breakthroughs in computing hardware, software, telecommunications and data analytics have transformed the financial industry, enabling a host of new products and services such as automated trading algorithms, crypto-currencies, mobile banking, crowdfunding and robo-advisors. However, the unintended consequences of technology-leveraged finance include fire sales, flash crashes, botched initial public offerings, cybersecurity breaches, catastrophic algorithmic trading errors and a technological arms race that has created new winners, losers and systemic risk in the financial ecosystem. These challenges are an unavoidable aspect of the growing importance of finance in an increasingly digital society. Rather than fighting this trend or forswearing technology, the ultimate solution is to develop more robust technology capable of adapting to the foibles in human behaviour so users can employ these tools safely, effectively and effortlessly. Examples of such technology are provided.
 
We introduce a new method of optimising the accuracy and time taken to calculate risk for a complex trading book, focusing on the use case of XVA. We dynamically choose the number of paths and time discretisation to target computational effort on calculations that give the most information in explaining the PnL of the book. The approach is applicable to both fast, accurate intraday pricing calculations as well as large batch runs. The results are demonstrated by application to a large XVA book, which demonstrates speed-ups comparable to those available via adjoint algorithmic differentiation, for a fraction of the implementation cost.
 
When automated trading strategies are developed and evaluated using backtests on historical pricing data, there exists a tendency to overfit to the past. Using a unique dataset of 888 algorithmic trading strategies developed and backtested on the Quantopian platform with at least 6 months of out-of-sample performance, we study the prevalence and impact of backtest overfitting. Specifically, we find that commonly reported backtest evaluation metrics like the Sharpe ratio offer little value in predicting out-of-sample performance (R² < 0.025). In contrast, higher order moments, like volatility and maximum drawdown, as well as portfolio construction features, like hedging, show significant predictive value of relevance to quantitative finance practitioners. Moreover, in line with prior theoretical considerations, we find empirical evidence of overfitting – the more backtesting a quant has done for a strategy, the larger the discrepancy between backtest and out-of-sample performance. Finally, we show that by training non-linear machine learning classifiers on a variety of features that describe backtest behavior, out-of-sample performance can be predicted at a much higher accuracy (R² = 0.17) on hold-out data compared to using linear, univariate features. A portfolio constructed from predictions on hold-out data performed significantly better out-of-sample than one constructed from algorithms with the highest backtest Sharpe ratios.
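The overfitting mechanism the paper documents, selecting strategies on backtest Sharpe, can be demonstrated on pure noise: even with zero true edge, the strategy selected by in-sample Sharpe looks attractive in the backtest. The strategy count and return distribution below are illustrative, unrelated to the Quantopian dataset:

```python
import random, statistics

def sharpe(returns):
    # Per-period Sharpe ratio (no annualisation, zero risk-free rate).
    return statistics.mean(returns) / statistics.stdev(returns)

random.seed(0)
# 200 "strategies" of pure noise: none has any true edge. Selecting the
# best by in-sample Sharpe still produces an impressive-looking backtest.
strategies = [[random.gauss(0.0, 0.01) for _ in range(252)] for _ in range(200)]
best = max(strategies, key=lambda r: sharpe(r[:126]))

in_sample = sharpe(best[:126])      # inflated by the selection step
out_of_sample = sharpe(best[126:])  # no selection effect; near zero on average
```

The gap between the two numbers is the selection bias; the paper's finding that more backtesting widens the backtest/out-of-sample discrepancy is this effect measured on real strategies.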