Saturday, November 3, 2012

Tower of Babel, the Semantics Initiative, and Ontology

What's in a name? That which we call a rose by any other name would smell as sweet. ~ William Shakespeare 
The beginning of wisdom is to call things by their right names. ~ Chinese Proverb

At a symposium held by the Securities Industry and Financial Markets Association (SIFMA) in March 2012, Andrew G. Haldane, Executive Director of Financial Stability for the Bank of England, gave a speech titled, “Towards a common financial language”.[1] Using the imagery of the Tower of Babel, Mr. Haldane described how…
Finance today faces a similar dilemma. It, too, has no common language for communicating financial information. Most financial firms have competing in-house languages, with information systems silo-ed by business line. Across firms, it is even less likely that information systems have a common mother tongue. Today, the number of global financial languages very likely exceeds the number of global spoken languages.

The economic costs of this linguistic diversity were brutally exposed by the financial crisis. Very few firms, possibly none, had the information systems necessary to aggregate quickly information on exposures and risks.[2] This hindered effective consolidated risk management. For some of the world’s biggest banks that proved terminal, as unforeseen risks swamped undermanned risk systems.

These problems were even more acute across firms. Many banks lacked adequate information on the risk of their counterparties, much less their counterparties’ counterparties. The whole credit chain was immersed in fog. These information failures contributed importantly to failures in, and seizures of, many of the world’s core financial markets, including the interbank money and securitization markets.

Why is this? One would think that the financial industry would be in a great position to capitalize on the growth of digital information. After all, data has been the game changer for decades. But Wall Street, while proficient at handling market data and certain financial information, is not well prepared for the explosion in unstructured data.

The so-called “big data” problem of handling massive amounts of unstructured data is not just about implementing new technologies like Apache Hadoop. As discussed at the CFTC Technology Advisory Committee on Data Standardization held September 30, 2011, there is significant confusion in the industry regarding “semantics”.[3]

EDM Council - FIBO Semantics Initiative

The “semantic barrier” is a major issue in the financial industry, one that standards such as ISO 20022 were created to resolve.[4] For example, what some participants in the payments industry call an Ordering Customer, others refer to as a Payer or Payor, while still others refer to as a Payment Originator or Initiator. Context also plays a role here: the Payment Originator/Initiator is a Debtor/Payor in a credit transfer, while that same Payment Originator/Initiator is a Creditor/Payee in a direct debit.[5]
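The synonym problem above can be sketched as a small lookup table. This is a toy illustration, not the ISO 20022 data dictionary; the term and role names below are simplified assumptions made for the example.

```python
# Hypothetical synonym table: many firm-local terms, one canonical concept.
SYNONYMS = {
    "ordering customer": "payment_originator",
    "payer": "payment_originator",
    "payor": "payment_originator",
    "payment initiator": "payment_originator",
}

def canonical_role(local_term: str, transaction_type: str) -> str:
    """Resolve a firm-local term to a context-dependent canonical role."""
    concept = SYNONYMS[local_term.lower()]
    if concept == "payment_originator":
        # Context matters: the originator is the debtor in a credit
        # transfer but the creditor in a direct debit.
        return "debtor" if transaction_type == "credit_transfer" else "creditor"
    return concept

print(canonical_role("Payer", "credit_transfer"))           # debtor
print(canonical_role("Ordering Customer", "direct_debit"))  # creditor
```

Note that the mapping alone is not enough: the same canonical concept resolves to different roles depending on the transaction type, which is exactly why context-free glossaries fall short.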

It should therefore be apparent that the intended use of systems relies on “human common sense” and understanding. Unfortunately, especially within large organizations or across an industry, the boundaries of intended use are often not documented and exist only as “tribal knowledge”. Even when well documented, informal language maintained in policies and procedures can result in unintentional misapplication, with consequences no less hazardous than intentional misapplication.

Overcoming semantic barriers...

If your avocation[6] involves organizing information and/or modeling data and systems, you invariably start asking epistemological[7] questions, even though such questions may not be immediately practical to the task at hand: What is knowledge? How is knowledge acquired? To what extent is it possible for a given concept, whether physical or abstract, to be known? Can computers understand meaning from the information they process and synthesize knowledge? “Can machines think?”[8]

Such questions are the impetus for an ongoing debate about “the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling.” Harnad (1990) framed this quandary as the Symbol Grounding Problem: “How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads?”[9] The meaning triangle[10] illustrates the underlying problem.

Figure 1 – Ogden and Richards (1923) meaning triangle 

Figure 1 is a model of how linguistic symbols stand for the objects they represent, which in turn provide an index to concepts in our minds. Note, too, that the triangle represents the perspective of only one person, whereas communication typically takes place between two or more persons (or devices such as computers). Hence, for two people or devices to understand each other, the meaning that relates term, referent and concept must align.

Now consider that different words might refer to the same concept, or worse, the same word could have different meanings, as with the term “orange”: are we referring to a fruit or to a color? This area of study is known as semantics.[11]
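The fruit-versus-color ambiguity can be sketched as a toy word-sense lookup: one term, several senses, with surrounding context words used to pick the most likely sense. The sense inventories below are invented for illustration; real systems use lexical resources such as WordNet.

```python
# Minimal sketch of lexical ambiguity: cue words vote for a sense.
SENSES = {
    "orange": {
        "fruit": {"eat", "peel", "juice", "tree"},
        "color": {"paint", "bright", "shade", "wall"},
    }
}

def disambiguate(term, context_words):
    """Choose the sense whose cue words overlap the context the most."""
    scores = {
        sense: len(cues & set(context_words))
        for sense, cues in SENSES[term].items()
    }
    return max(scores, key=scores.get)

print(disambiguate("orange", {"peel", "juice"}))   # fruit
print(disambiguate("orange", {"bright", "wall"}))  # color
```

The point of the sketch is that the term alone carries no decidable meaning; only the context (pragmatics) resolves which concept is intended.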

Relating semantics to ontology

As the meaning triangle exemplifies, monikers—whether they be linguistic[12] or symbolic[13]—are imperfect indexes. They rely on people having the ability to derive denotative (ie, explicit) meaning, and/or connotative (ie, implicit) meaning from words/signs. If the encoder (ie, sender) and the decoder (ie, receiver) do not share both the denotative and connotative meaning of a word/sign, miscommunication can occur. In fact, at the connotative level, context determines meaning.

Analytic approaches to this problem fall under the domain of semiotics,[14] which for our purposes encompasses the study of words and signs as elements of communicative behavior. Consequently, we consider linguistics and semiosis[15] to come under the subject of semiotics.[16] Semiotics, in turn, is divided into three branches or subfields: (i) semantics; (ii) syntactics;[17] and (iii) pragmatics.[18]

Various disciplines are used to model concepts within this field of study. These disciplines include, but are not necessarily limited to, lexicons/synsets, taxonomies, formal logic, symbolic logic, schema related to protocols (ie, syntactics), schema related to diagrams (ie, semiosis), actor-network theory, and metadata [eg, structural (data about data containers), descriptive (data about data content)]. In combination, these various methods form the toolkit for ontology work.

Ontology involves the study of the nature of being, existence, reality, as well as the basic categories of being and their relations. It encompasses answering metaphysical[19] questions relating to quiddity, that is, the quality that makes a thing what it is—the essential nature of a thing.

Admittedly, there are divergent views amongst practitioners as to what constitutes ontology, as well as the classification of semiotics and related methodologies. To be sure, keeping all these concepts straight in one’s mind is not without difficulty for those without formal training. Further, “ontology has become a prevalent buzzword in computer science. An unfortunate side-effect is that the term has become less meaningful, being used to describe everything from what used to be identified as taxonomies or semantic networks, all the way to formal theories in logic.”[20]

Figure 2 is a schematic diagram illustrating a hierarchical conceptualization of ontology, its relation to epistemology, metaphysics, and semiotics, as well as its relation to cognitive science. Figure 2 also shows how semiotics encompasses linguistics and semiosis.

Figure 2 – Conceptualization of epistemology, metaphysics, ontology, and semiotics

Knowledge representation and first order logic

Knowledge representation (KR) is an area of artificial intelligence research aimed at representing knowledge in symbols to facilitate systematic inferences from knowledge elements, thereby synthesizing new elements of knowledge. KR involves analysis of how to reason accurately and effectively, and how best to use a set of symbols to represent a set of facts within a knowledge domain.

A key parameter in choosing or creating a KR is its expressivity. The more expressive a KR, the easier and more compact it is to express a fact or element of knowledge within the semantics and syntax of that KR. However, more expressive languages are likely to require more complex logic and algorithms to construct equivalent inferences. A highly expressive KR is also less likely to be complete and consistent; whereas less expressive KRs may be both complete and consistent.

Recent developments in KR include the concept of the Semantic Web, and the development of XML-based knowledge representation languages and standards, including the Resource Description Framework (RDF), RDF Schema, Topic Maps, DARPA Agent Markup Language (DAML), Ontology Inference Layer (OIL), and the Web Ontology Language (OWL).[21]
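The core idea behind RDF, that all knowledge can be expressed as (subject, predicate, object) statements, can be sketched as a toy triple store in plain Python. Real systems would use an RDF library and standard vocabularies; the resource names below are illustrative assumptions.

```python
# A toy triple store: each statement is a (subject, predicate, object) tuple.
triples = {
    ("SUMO", "isA", "UpperOntology"),
    ("OWL", "isA", "KRLanguage"),
    ("RDF", "isA", "KRLanguage"),
    ("OWL", "buildsOn", "RDF"),
}

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return {
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    }

# All things declared to be KR languages:
print(sorted(t[0] for t in query(p="isA", o="KRLanguage")))  # ['OWL', 'RDF']
```

This pattern-matching style of retrieval is, in miniature, what SPARQL queries do over full RDF graphs.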

Figure 3 – Adapted from Pease (2011) [Figure 15] and Obrst (2012)

SUMO, an open-source formal ontology expressed in a declarative language based on first order logic,[22] resides at the higher end of the scale in terms of both formality and expressiveness. The upper-level ontology of SUMO consists of ~1120 terms, ~4500 axioms and ~795 rules, and has been extended with a mid-level ontology (MILO) as well as domain-specific ontologies. Written in the SUO-KIF language, it is the only formal ontology that has been mapped to all of the WordNet lexicon.

Formal languages such as DAML, OIL, and OWL are geared towards classification. What makes SUMO distinct from other modeling approaches (eg, UML or frame-based languages) is its use of predicate logic. SUMO preserves the ability to structure taxonomic relationships and inheritance, but extends those techniques with an expressive set of terms, axioms and rules that can more accurately model geospatial and temporal concepts, both physical and abstract.
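The kind of inference that predicate logic enables can be sketched with a toy forward-chaining engine. Real SUMO axioms are written in SUO-KIF and processed by theorem provers; this Python sketch handles only simple class-membership rules of the form "if X is an instance of A, then X is an instance of B", with invented facts.

```python
# Ground facts as (predicate, argument, class) tuples.
facts = {("instance", "Socrates", "Human")}

# Each rule: (premise pattern, conclusion template); None marks the variable.
rules = [
    (("instance", None, "Human"),  ("instance", None, "Mammal")),
    (("instance", None, "Mammal"), ("instance", None, "Animal")),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts are derived (a fixed point)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (p_pred, _, p_cls), (c_pred, _, c_cls) in rules:
            for pred, arg, cls in list(derived):
                if pred == p_pred and cls == p_cls:
                    new = (c_pred, arg, c_cls)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

print(("instance", "Socrates", "Animal") in forward_chain(facts, rules))  # True
```

The derived fact that Socrates is an Animal was never stated; it follows from chaining the two rules, which is precisely the kind of synthesis of new knowledge elements that KR aims at.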

Nevertheless, KR modeling can suffer from the “garbage in, garbage out” syndrome. Developing domain ontologies with SUMO is no exception. That is why in a large ontology such as SUMO/MILO, validation is very important.[23]

Models of concepts are second derivatives

Returning to Ogden’s and Richards’ (1923) meaning triangle, the relations between term, referent and concept may be phrased more precisely in causal terms:
  • The matter (referent) evokes the writer's thought (concept). 
  • The writer refers the matter (referent) to the symbol (term). 
  • The symbol (term) evokes the reader's thought (concept). 
  • The reader refers the symbol (term) back to the matter (referent).

When the writer refers the matter to the symbol, the writer is effectively modeling the referent. The method used is informal language. However, without a formal semantic system in which to model concepts, the use of natural language as a representation of concepts suffers from the fact that informal languages have meaning only by virtue of human interpretation of words. Likewise, it is important not to mistake the term for a substitute for the referent itself.

In calculus, the second derivative of a function ƒ is the derivative of the derivative of ƒ. Likewise, a KR archetype or replica of a referent (ie, a physical or abstract thing) can be considered a second derivative: the concept is the first derivative, and the model of the concept is the second derivative. What can add to the confusion is the term labeling the referent versus the term labeling the model of the concept of the referent. One's inclination is to substitute the label for the referent.

Figure 4 – Adapted from Sowa (2000), “Ontology, Metadata, and Semiotics” 

Thus, it is important to recognize that a representation of a thing is at its most fundamental level still a surrogate, a substitute for the thing itself. It is a medium of expression. In fact, “the only model that is not wrong is reality and reality is not, by definition, a model.”[24] Still, a pragmatic method for addressing this concern derives from development and use of ontology.

Unstructured data and a way forward…

Over the past two decades much progress has been made on shared conceptualizations and the theory of semantics, as well as the application of these disciplines to advanced computer systems. Such progress has provided the means to solve the problem of deriving meaningful information from the explosion of unstructured data (ie, “big data”) overwhelming the banking industry, as well as the many other industries that suffer from the same issue.

The missing link underlying “the cause of many a failure to design a proper dialect... [is] the general lack of an upper ontology that could provide the basis for mid-level ontologies and other domain specific metadata dictionaries or lexicons.” The key then is use of an upper-level ontology that “gives those who use data modeling techniques a common footing to stand on before they undertake their tasks.”[25] SUMO, as an open source formal ontology (think Linux), is a promising technology for such purpose with an evolving set of tools and an emerging array of applications/uses solving real world problems.

As Duane Nickull, Senior Technology Evangelist at Adobe Systems explained, “[SUMO] provides a level setting for our existence and sets up the framework on which we can do much more meaningful work.”[26]

About SUMO:
The Suggested Upper Merged Ontology (SUMO) and its domain ontologies form the largest formal public ontology in existence today. They are being used for research and applications in search, linguistics and reasoning. SUMO is the only formal ontology that has been mapped to all of the WordNet lexicon. SUMO is written in the SUO-KIF language. Sigma Knowledge Engineering Environment (Sigma KEE) is an environment for creating, testing, modifying, and performing inference with ontologies developed in SUO-KIF (e.g., SUMO, MILO). SUMO is free and owned by the IEEE. The ontologies that extend SUMO are available under GNU General Public License. Adam Pease is the Technical Editor of SUMO.  

For more information, also see the list of research publications citing SUMO.

Users of SUO-KIF and Sigma KEE consent, by use of this code, to credit Articulate Software and Teknowledge in any writings, briefings, publications, presentations, or other representations of any software that incorporates, builds on, or uses this code. Please cite the following article in any publication with references:

Pease, A., (2003). The Sigma Ontology Development Environment. In Working Notes of the IJCAI-2003 Workshop on Ontology and Distributed Systems, August 9, 2003, Acapulco, Mexico.

[1] Speech by Mr Andrew G Haldane, Executive Director, Financial Stability, Bank of England, at the Securities Industry and Financial Markets Association (SIFMA) “Building a Global Legal Entity Identifier Framework” Symposium, New York, 14 March 2012.

[2] Counterparty Risk Management Policy Group (2008).

[3] CFTC Technology Advisory Subcommittee on Data Standardization Meeting to Publicly Present Interim Findings on: (1) Universal Product and Legal Entity Identifiers; (2) Standardization of Machine-Readable Legal Contracts; (3) Semantics; and (4) Data Storage and Retrieval. Meeting notes by Association of Institutional Investors. Source:

[4] See:

[5] SWIFT Standards Team, and Society for Worldwide Interbank Financial Telecommunication (2010). ISO 20022 for Dummies. Chichester, West Sussex, England: Wiley.

[6] The term “avocation” has three seemingly conflicting definitions: 1. something a person does in addition to a principal occupation; 2. a person's regular occupation, calling, or vocation; 3. Archaic diversion or distraction. Note: we purposefully selected this term because it relates to pragmatics; specifically, the “semantic barrier”.

[7] e•pis•te•mol•o•gy, n., 1. a branch of philosophy that investigates the origin, nature, methods, and limits of human knowledge; 2. the theory of knowledge, esp the critical study of its validity, methods, and scope.

[8] Turing, A.M. (1950). “Computing machinery and intelligence” Mind, 59, 433-460.

[9] Harnad, Stevan (1990) “The Symbol Grounding Problem” Physica, D 42:1-3 pp. 335-346.

[10] Ogden, C., and Richards, I. (1923). The meaning of meaning. A study of the influence of language upon thought and of the science of symbolism. Supplementary essays by Malinowski and Crookshank. New York: Harcourt.

[11] se•man•tics, n., 1. linguistics the branch of linguistics that deals with the study of meaning, changes in meaning, and the principles that govern the relationship between sentences or words and their meanings; 2. significs the study of the relationships between signs and symbols and what they represent; 3. logic a. the study of interpretations of a formal theory; b. the study of the relationship between the structure of a theory and its subject matter; c. the principles that determine the truth or falsehood of sentences within the theory. 

[12] lin•guis•tics, n., the science of language, including phonetics, phonology, morphology, syntax, semantics, pragmatics, and historical linguistics.

[13] Use of term “symbolic” refers to semiosis and the term “sign,” which is something that can be interpreted as having a meaning for something other than itself, and therefore able to communicate information to the person or device which is decoding the sign. Signs can work through any of the senses: visual, auditory, tactile, olfactory or taste. Examples include natural language, mathematical symbols, signage that directs traffic, and non-verbal interaction such as sign language. Note: we categorized linguistics to be a subclass of semiotics.

[14] se•mi•ot•ics, n., the study of signs and sign processes (semiosis), indication, designation, likeness, analogy, metaphor, symbolism, signification, and communication. Semiotics is closely related to the field of linguistics, which, for its part, studies the structure and meaning of language more specifically.

[15] The term “semiosis” was coined by Charles Sanders Peirce (1839–1914) in his theory of sign relations to describe a process that interprets signs as referring to their objects. Semiosis is any form of activity, conduct, or process that involves signs, including the production of meaning. [Related concepts: umwelt, semiosphere]

[16] One school of thought argues that language is the semiotic prototype and its study illuminates principles that can be applied to other sign systems. The opposing school argues that there is a meta system, and that language is simply one of many codes (ie, signs) for communicating meaning.

[17] syn•tac•tic, n., 1. the branch of semiotics that deals with the formal properties of symbol systems. 2. logic, linguistics the grammatical structure of an expression or the rules of well-formedness of a formal system.

[18] prag•mat•ics, n. 1. logic, philosophy the branch of semiotics dealing with the causal and other relations between words, expressions, or symbols and their users. 2. linguistics the analysis of language in terms of the situational context within which utterances are made, including the knowledge and beliefs of the speaker and the relation between speaker and listener. [Note: pragmatics is closely related to the study of semiosis.]

[19] met•a•phys•ics, n., 1. the branch of philosophy that treats of first principles, includes ontology and cosmology, and is intimately connected with epistemology; 2. philosophy, especially in its more abstruse branches; 3. the underlying theoretical principles of a subject or field of inquiry.

[20] Pease, Adam. (2011). Ontology: A Practical Guide. Angwin: Articulate Software Press.

[21] A review of the listed KR approaches is outside the scope of this discussion. See Pease (2011), Ontology: A Practical Guide, ‘Chapter 2: Knowledge Representation’ for a more in-depth discussion/comparison.

[22] While OWL is based on description logic, its primary construct is taxonomy (ie, frame language).

[23] See Pease (2011), Ontology: A Practical Guide, pp. 89-91 for further discussion on validation.

[24] Haldane, Andrew G. (2009). “Why banks failed the stress test” Speech, Financial Stability, Bank of England, at the Marcus-Evans Conference on Stress-Testing, London, 9-10 February 2009.

[25] Duane Nickull, Senior Technology Evangelist, Adobe Systems; foreword to “Ontology” by Adam Pease.

[26] Multiple contributors (2009). “Introducing Semantic Technologies and the Vision of the Semantic Web,” Frontier Journal, Volume 6, Number 7, July 2009. See:

Monday, October 29, 2012

Dodd-Frank: Is smart politics, smart business?

It may be smart politics to fight Dodd-Frank, but is it smart business? Throughout the primary and general election season, Republicans have repeatedly invoked the law’s 848-page girth—and its rules on, among other things, trading derivatives and swaps—as a symbol of government overreach that is killing jobs.[1]

As noted by Michael Greenberger, professor at the University of Maryland's Francis King Carey School of Law, tactics used to try and stop Dodd-Frank include attempts at blocking its passage, starving regulators financially so the law cannot be enforced, and most recently, challenging the final rules with a flood of lawsuits in federal courts claiming that regulators have used improper cost-benefit analyses.[2]

There seem to be two major themes underlying Wall Street’s resistance. The first is the cost Dodd-Frank will impose on certain institutions’ existing business models, exposing these firms to more competition or rendering certain lucrative ways of doing business no longer viable. The second is the cost of implementing and/or upgrading technology to properly support the deluge of new requirements, some of which contemplate the building of infrastructure that does not currently exist. See 77 FR 21278 - Customer Clearing Documentation, Timing of Acceptance for Clearing, and Clearing Member Risk Management.

The evidence of the fight to reduce Dodd-Frank’s impact on derivatives trading is scattered throughout the regulations promulgated by the CFTC. The final rules contain a summary of comments by industry participants and discussion of the CFTC's views in response. Take for example the discussion surrounding customer clearing documentation and trilateral agreements…

Six commentators[3] explained in detail why trilateral agreements are bad for the markets, noting that such agreements discourage competition and efficient pricing, compromise anonymity, reduce liquidity, increase the time between execution and clearing, introduce conflicts of interest, and prevent the success of swap execution facilities (SEFs).

Opposing this view were many of the major banks [4] who contend that without the trilateral agreements some market participants may have reduced access to markets. The banks suggest that "instead of prohibiting trilateral agreements, the CFTC could require that the allocation of credit limits across executing counterparties be specified by the customer, rather than the futures commission merchant (FCM), who would confirm the customer’s allocation to the identified executing counterparties."

Contrary to such protests, the CFTC asserts that the rules do not prohibit trilateral agreements; rather, they prohibit certain provisions contained in trilateral or bilateral agreements. Further, the CFTC emphasizes that nothing in these rules would restrain a swap dealer (SD) or major swap participant (MSP) from establishing bilateral limits with each of its counterparties, much less impair an SD’s or MSP’s ability to conduct due diligence on each of its counterparties.

In fact, rather than discouraging competition, the law prohibits an SD or MSP from adopting any process that imposes any material anti-competitive burden on trading or clearing. In addition, derivatives clearing organization (DCO) rules provide for the non-discriminatory clearing of swaps.

This would seem conceptually amenable, but it is argued that pre- and post-trade uncertainty, caused by a delay between the time of trade execution and the time of trade acceptance into clearing, would undermine market integrity and, by implication, impede liquidity, efficiency and market stability.

Accordingly, the CFTC revised language to clarify that, for swaps that will be submitted for clearing, an SD or MSP may continue to manage its risk by limiting its exposure to the counterparty with whom it is trading. This clarification is intended to both emphasize the need to conduct appropriate risk management, as well as address the concern that until straight through processing is achieved, SDs and MSPs will still need to manage risk to a counterparty before a trade is accepted or rejected for clearing.

And therein lies the crux of the matter. For prompt and efficient clearing to occur, the rules, procedures and operational systems of the trading platform and the clearinghouse must align. Vertically integrated trading and clearing systems currently process high volumes of transactions quickly and efficiently. But they also form a monopoly.

Under the distributed structure contemplated by Title VII, each SEF and designated contract market (DCM) is required to assure equal access to all DCOs that wish to clear trades executed through the facilities of the SEF or DCM.

The technological issue then is minimizing the time between trade execution and acceptance into clearing. This time lag potentially presents credit risk to the swap counterparties, clearing members, and the DCO because the value of a position may change significantly between the time of execution and the time of novation, thereby allowing financial exposure to accumulate in the absence of daily mark-to-market.
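The exposure created by the execution-to-novation lag can be illustrated with a toy calculation. The notional and rate-sensitivity figures below are hypothetical, chosen only to show how quickly unmargined exposure can accumulate before a trade is accepted for clearing.

```python
# Hypothetical swap: value sensitivity stated as DV01 (dollar value of 1bp).
notional = 100_000_000   # $100mm interest-rate swap (hypothetical)
dv01 = 45_000            # dollar P&L per 1bp rate move (hypothetical)

def exposure_after(rate_move_bp: float) -> float:
    """Unmargined mark-to-market change accumulated over the clearing lag."""
    return dv01 * rate_move_bp

# A 20bp move between execution and acceptance into clearing:
print(f"${exposure_after(20):,.0f}")  # $900,000
```

Under daily mark-to-market and margining this exposure would be collateralized; during the lag it is not, which is why minimizing the time to novation matters.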

Thus, what is not often discussed in the political furor over Dodd-Frank is how this legislation is driving industry participants toward “prompt, efficient, and accurate processing of trades” while simultaneously encouraging competition. An initiative to improve and better integrate front-to-back-office processing on such a large scale has not been seen since the “paper crunch” of the 1970s and the passing of the Securities Act Amendments of 1975. In our opinion, it’s about time.


[1] Edward Wyatt, Dodd-Frank Act a Favorite Target for Republicans Laying Blame, New York Times, September 20, 2011 

[2] Michael Greenberger, Will Wall Street prevail? The Baltimore Sun, October 8, 2012. 

[3] The Alternative Investment Management Association Ltd; Javelin Capital Markets, LLC; Societe Generale; Asset Management Group of the Securities Industry and Financial Markets Association (SIFMA); Spring Trading, Inc.; Vanguard. 

[4] Bank of America; Merrill Lynch; BNP Paribas; Citi; Credit Suisse Securities (USA) LLC; Goldman Sachs; HSBC; J.P. Morgan; Deutsche Bank; Edison Electric Institute; ISDA; Morgan Stanley; Societe Generale; UBS Securities LLC.

Friday, October 19, 2012

Thoughts on ISDA SIFMA v. U.S. CFTC

In reading Judge Wilkins's Memorandum Opinion in Civil Action No. 11-cv-2146 (RLW), in which ISDA and SIFMA challenged the CFTC on position limits on derivatives tied to 28 physical commodities, we are reminded of an insightful line from a paper on behavioral finance:
"What has happened is that we've used these assumptions for so long that we've forgotten that we've merely made assumptions, and we've come to believe that the world is necessarily this way."

The law's underlying flaw is that "excessive speculation" is an eye-of-the-beholder standard, not black-letter law. The same can be said of the phrase "undue and unnecessary burden on interstate commerce". Thus, in a semantic tour de force, the ruling reveals two insights into the rule-making process.

First is the reliance on economic thought, and how philosophical disagreements amongst economists within the so-called "dismal science" are the "gift that keeps on giving" with respect to obfuscating rule-making intent. In an "appeal to authority," Judge Wilkins references various CFTC Commissioners' prior statements forecasting the Plaintiffs' argument as to the need for "statutorily-required findings of necessity prior to promulgating the Position Limits Rule". Given that economists will dispute ad infinitum whether "position limits" can truly constrain "excessive speculation", the Rule's destruction was a fait accompli.

Second, given that the law states that limits for exempt commodities are required to be established within 180 days after July 21, 2010, and that limits for agricultural commodities are required to be established within 270 days after July 21, 2010, the CFTC was faced with a conundrum. How can it both comply with the law's deadline requirement, and serve the needs of its constituency which generally stands against imposition of position limits [see comment letters]?

The answer seems to have been the CFTC's decision to avoid first performing "any reliable economic analysis," as suggested by Commissioner Dunn, lending support to the Plaintiffs' claims:
  1. Violation of the CEA and APA--Failure to Determine the Rule to be Necessary and Appropriate under 7 U.S.C. § 6a(a)(1), (a)(2)(A), (a)(5)(A)
  2. Violation of the CEA--Insufficient Evaluation of Costs and Benefits under 7 U.S.C. § 19(a)
  3. Violation of the APA--Arbitrary and Capricious Agency Action in Promulgating the Position Limits Rule
  4. Violation of the APA--Arbitrary and Capricious Agency Action in Establishing Specific Position Limits and Adopting Related Requirements and Restrictions
  5. Violation of the APA--Failure to Provide Interested Persons A Sufficient Opportunity to Meaningfully Participate in the Rulemaking
The above arguments are not a one-off situation. In many of its Final Rules, the CFTC has not engaged in cost-benefit analysis. If President Obama wins re-election, one can assume that in the years ahead we'll see more of these kinds of cases brought before the courts, with this ruling providing precedent. Should that scenario prevail, we may not see full implementation of Title VII for many years, if ever.

The irony for the industry is the ongoing uncertainty surrounding the applicability of the CFTC's Final Rules involving Title VII. In other words, what may suffer most from an industry strategy to undermine Dodd-Frank through the courts is SIFMA's own claimed mission, that of "building trust and confidence in the financial markets".

Regardless, we think this ruling's potential impact on all other Final Rules issued by the CFTC is significant. It sets precedent and legitimizes an avenue of attack: pulling the rug out from under the CFTC's reliance on a 1981 rulemaking, upon which it assumed that cost-benefit analysis could be avoided.

Monday, June 28, 2010

Giant Shoulders and the Theory of Storage

There is a saying about the pursuit of knowledge: “we are like dwarfs on the shoulders of giants.”[1] Such truism certainly applies to commodity pricing theory which includes: (1) the insurance aspect of commodity futures contracts emphasizing the role of the speculator; (2) the theory of storage, which is focused on the behavior of the inventory holder and commercial hedger; (3) the net-hedging-pressure hypothesis, which encompasses the behavior of both classes of participant; (4) the statistical behavior of commodity futures prices; (5) the attempt to reconcile commodity futures returns with the CAPM; (6) the role of commodities in a strategic asset allocation; and (7) the importance of yields as a long-term driver of commodity returns.[2] This article investigates the “theory of storage”[3] starting with the ideas of Holbrook Working.

It should be obvious that holders of commodities incur costs for financing and storing inventories, including warehousing, handling charges and insurance. The conundrum is that, not infrequently, prices of deferred futures are below those of the nearby future and/or the spot price. When such a situation occurs the market is said to reflect an “inverse carrying charge” or “backwardation.” Much academic study has been dedicated to this seemingly strange phenomenon and to providing a satisfactory answer as to why such conditions exist.

Working, whose 1949 paper The Theory of Price of Storage evolved from Keynes’ writings on commodity pricing, investigates the “theoretical problem… that it is existing supply rather than expected change in the supply which is involved in determining inter-temporal price relations.” This statement is more interesting if one goes back to Working’s 1948 article, Theory of the Inverse Carrying Charge in Futures Markets, in which he discusses “four different lines of attempt to explain inverse carrying charges.”
There seems to be substantial agreement among writers on futures markets that positive carrying charges tend to reflect marginal net costs of storage, and that when carrying charges are positive, prices of near and distant futures must respond about equally to any causes of price change. With regard to inverse carrying charges there is more difference of opinion and much reason for the student of futures markets to be dissatisfied with the present state of theory. How shall one explain a large inverse carrying charge between December and May in a United States wheat market? Do inverse carrying charges reliably forecast price declines? When prices of deferred futures fall below the spot price, does the futures market tend to lose effective connection with the cash market? These are some specific questions that call for answer and to which no satisfactory reply seems to be offered by prevailing theory.
Theory of the Inverse Carrying Charge - Working

The first explanation that Working (1948) explores is the idea that “cash and futures prices, though related, are not equivalents aside from the time element.” In the narrow sense the basis difference may be due to quality differentials or delivery locations.[4] However, as Working noted even back in the 1940s, the cash market is “clearly subsidiary from the standpoint of price formation” and that cash buyers and sellers ordinarily “bargain in terms of cents ‘over’ and cents ‘under’.” More broadly, it can be argued that “cash and futures prices may differ because they reflect the opinions of substantially different groups of traders.” However, this implies that cash-futures arbitrage is ineffective. In this regard Working points out that hedging is essentially a form of arbitrage, and notes that hedging persists even when markets are backwardated.

Next, Working addresses the view that futures prices tend to have a downward bias due to risk aversion. Keynes (1930) posits that producers are willing to remunerate speculators in order “to avoid the risk of price fluctuations… during the period of production” causing the spot price to exceed the forward price. Vance (1946) adds “that future events always bear some degree of uncertainty is perhaps sufficient to justify a discounting of expectations.” Working admits that such ideas probably have some validity, but points out that risk avoidance can be applied to “possible future events” which are “price-elevating” as well as “price-depressing.” Rather, risk aversion is “pertinent as partial explanation of the ‘supply curve for storage’.”

The third point that Working disparages is “the belief that price differences between futures commonly reflect expectations regarding future developments.” Working argues that there is a continuity between nearby futures and deferred futures to the extent that “expectations arising from an existing supply situation” have a bearing on “expectations regarding future supply or demand developments” and vice-versa. This hypothesis is substantiated by empirical research in which evidence of “large changes in supply prospect on price relations” between nearby and distant futures contracts “lasted for not more than a week or two”, and was “followed quickly by a return toward the previous price relation” prior to the disturbance.

Working’s conclusion is that inverse carrying charges are best “explained in terms of the concept of price of storage.” First, “spot and futures prices for a commodity are intimately connected at all times” and “do not, in general, measure expected consequences of future developments.” Second, “inverse carrying charges are reliable indications of current shortage; the forecast of price decline which [the term structure implies] is no more reliable than a forecast… of the current supply situation itself.” Third, inverse carry is explained within a commercial context whereby hedgers “are willing to risk loss on a fraction of the stocks for the sake of assurance against having their… activities handicapped by shortage of supplies.”

From these thoughts Working (1949) developed his price-of-storage theory which “exposes clearly the fact that in the presence of hedging much storage does occur in response to a recorded, and competitively determined, assurance of return specifically for the storage itself.” In other words, the futures market “coupled with the practice of hedging, gives potential holders of [a commodity] a precise or at least a good approximate index of the return to be expected from storing [such commodity]. A known return for storage is, in essentials, a price of storage… determined in a free market through the competition of those who seek to supply storage service.”

With respect to inverse carrying charges, Working noted that “the problem [with the theory] tends to emerge clearly only when the price for deferred delivery is below the ‘nearer’ price.” Nevertheless, Working argues that such conditions can be rationalized by Kaldor’s (1939) ‘convenience yield’ whereby “stocks of all goods possess a yield… which is a compensation to the holder of stocks, [and] must be deducted from carrying costs proper in calculating net carrying cost.” In other words, to remain long in business a merchant “must carry stocks beyond known immediate needs and take his return in general customer satisfaction.” Thus “convenience yield may offset what appears as a fairly large loss from exercise of the storage function itself.”[5]
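To make the carry relation concrete, here is a minimal Python sketch of the standard cost-of-carry identity with a convenience yield (the function name and all numbers are our own illustrative assumptions, not market data or anything from Working or Kaldor):

```python
import math

def futures_price(spot, r, storage, convenience, t):
    """Cost-of-carry relation with a convenience yield.
    All rates are continuously compounded, per annum; t is in years.
    Backwardation (inverse carry) appears when the convenience yield
    exceeds financing plus storage costs."""
    return spot * math.exp((r + storage - convenience) * t)

# Illustrative numbers only
spot = 100.0
contango = futures_price(spot, r=0.03, storage=0.02, convenience=0.01, t=1.0)
backwardation = futures_price(spot, r=0.03, storage=0.02, convenience=0.09, t=1.0)
assert contango > spot       # positive carrying charge
assert backwardation < spot  # inverse carrying charge
```

The sketch captures Working's point: a sufficiently large convenience yield can hold the deferred price below the nearby price even while storage continues.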

In effect, the price-of-storage theory, in which “positive carrying charges tend to reflect marginal net costs of storage,” can be extended to include negative carrying charges. According to Working (1948, 1949), “the supply-curve relationship between amount of storage and price of storage does not break down when the ‘price’ becomes negative… [which] occur when supplies are relatively scarce.” Rather, “a negative price of storage makes available for consumption in a year of shortage, supplies which would otherwise remain tied up in ‘convenience stocks’.” On the other hand, “if stocks to be stored are exceptionally large, the return for carrying [commodities] may exceed the ‘cost’ of storage… If stocks are quite moderate, competition among firms with storage facilities tends to result in the storage being provided for a rather small return.”

Michael Brennan in his 1958 paper, The Supply of Storage, both clarifies and expands on Working’s (1949) theory through description, formulae and empirical analysis. First he introduces the ‘demand for storage’ equation, Pt+1 – Pt = ft+1(St + Xt+1 – St+1) – ft(St-1 + Xt – St),[6] wherein “the demand curve for storage of a commodity from period t to period t+1 will shift upward (e.g., to D΄D΄ in Figure 1) as result of an increase in production in t,… [and] opposite movements of [this variable] will produce a shift downward.” Brennan then defines the ‘net marginal cost of storage’ in period t as “marginal outlay on physical storage plus a marginal risk-aversion factor minus the marginal convenience yield on stocks.” He also assumes that “marginal outlay is approximately constant until total warehouse capacity is almost fully utilized”, and beyond this, outlay rises at an increasing rate. The net marginal cost of storage equation is mt΄(St) = ot΄(St) + rt΄(St) – ct΄(St).[7]
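Brennan's decomposition m΄(S) = o΄(S) + r΄(S) − c΄(S) can be sketched numerically. The functional forms and parameters below are our own toy assumptions, chosen only to reproduce the qualitative shapes Brennan describes (roughly constant outlay until capacity, risk aversion rising in stocks, convenience yield falling to zero as stocks grow):

```python
def net_marginal_cost_of_storage(stocks, capacity=100.0):
    """Toy version of Brennan (1958): m'(S) = o'(S) + r'(S) - c'(S).
    o': physical outlay, roughly constant until capacity is nearly full
    r': risk-aversion factor, increasing in stocks held
    c': convenience yield, large when stocks are small, zero when large"""
    outlay = 1.0 if stocks < 0.9 * capacity else 1.0 + 0.5 * (stocks - 0.9 * capacity)
    risk = 0.02 * stocks
    convenience = max(0.0, 5.0 - 0.1 * stocks)
    return outlay + risk - convenience

# With small stocks the convenience yield dominates, so the net
# marginal cost of storage (and hence the basis) is negative:
# an inverse carrying charge.
assert net_marginal_cost_of_storage(10.0) < 0
# With ample stocks the convenience yield vanishes and carry is positive.
assert net_marginal_cost_of_storage(80.0) > 0
```

The point is not the particular numbers but that the same supply-of-storage curve accommodates both positive and negative "prices" of storage, which is exactly Working's extension.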

From such equations Brennan (1958) derives two graphics which elegantly reveal the key dynamics influencing the determination of positive versus negative carry prices. In the first graphic below, DD, D΄D΄ and D΄΄D΄΄ are demand-for-storage curves, and CC is the supply curve for storage. It is noted that the horizontal axis represents supply of storage at end of t, and the vertical axis represents the basis between deferred and nearby futures.

The second graphic reveals the interplay between convenience yield, storage outlay and risk aversion upon the net marginal cost of storage. Importantly, Brennan notes that risk aversion should be an increasing function of inventories held. “If a comparatively small quantity of stocks is held, the risk involved in undertaking the investment in stocks is also small.” However, “there is probably some critical level of stocks at which the loss would seriously endanger the firm’s credit position, and as stocks increase up to this point the risk incurred in holding them will steadily increase also—the risk of loss will constitute a part of the cost of storage.”

Brennan also adds to Kaldor’s and Working’s discussion on the convenience yield by noting it is “attributed to the advantage (in terms of less delay and lower costs) of being able to keep regular customers satisfied or of being able to take advantage of a rise in demand and price without resorting to a revision of the production schedule. Similarly, for a processing firm the availability of stocks as raw materials permits variations in production without incurring the trouble, cost and perhaps delays of frequent spot purchases and deliveries. A wholesaler can vary his sales in response to an increased flow of orders only if he has sufficient stocks on hand. The smaller the level of stocks on hand the greater will be the convenience yield of an additional unit.” Alternatively, “it is assumed that there is some quantity of stocks so large that the marginal convenience yield is zero.” Further, distinction is sometimes made between ‘surplus’ stocks, which Brennan notes are distinguished by a speculative motive, versus ‘pipeline’ or ‘working’ stocks.

To wrap up this article, we leave the reader with Lester Telser’s insights. Telser (1958) analyzed storage in relation to firms’ stockholding schedule, and then related the results of his study to conventionally-held ideas about commodity pricing which emphasize the role of the speculator and the role of commodity futures contracts as insurance. In certain respects Telser’s findings support Working’s (1949) conclusion that “only some direct explanation of the price relation in terms of an existing condition can account for the fact that expectations regarding future events, which are directly pertinent to a distant forward price, have approximately the same effect on spot and near forward prices as on a distant forward price.”
A widely accepted theory advanced by Keynes and Hicks which relates the futures price and the expected spot price regards hedgers as buyers of insurance and speculators as sellers of insurance who must be induced to bear the risk of price changes. When statistical evidence was examined to see whether futures prices display an upward trend as they approach maturity predicted by this theory, it was found instead that futures prices display no trend. Although hedgers may be willing to pay speculators to bear the risks of price changes, they need not do so if speculators are eager to speculate. Firms that hedge can reduce their price risks at little or no cost to themselves. I accept the hypothesis that the futures price equals the expected spot price.
The foregoing was presented to help educate readers on the “theory of storage”, specifically key concepts in the ever-evolving debate over which factors most influence commodity prices. The conclusion we arrive at is the same one we have reached in other studies: pricing models are first and foremost informed by perspective, and perspective is informed by assumptions. As a result, the legacy of research will likely remain inconclusive with respect to modeling the sources of returns in the commodity futures markets, largely because these models have inherent shortcomings in pinpointing an authoritative source of structural risk premium within the complexity of such markets.

[1] “Dwarfs standing on the shoulders of giants” (Latin: nanos gigantium humeris insidentes) is a Western metaphor meaning, one who develops intellectual works by understanding the research created by notable thinkers of the past. The saying is attributed to Bernard of Chartres due to John of Salisbury’s reference in Metalogicon (1159).

[2] Hilary Till (2007). “Part I of A Long-Term Perspective on Commodity Futures Returns: Review of the Historical Literature” Intelligent Commodity Investing, Till and Eagleeye, Ed., London: Risk Books, p 39.

[3] In the paper, The Supply of Storage, Brennan (1958) clarifies that “supply of storage refers not to the supply of storage space but to the supply of commodities as inventories. In general, a supplier of storage is anyone who holds title to stocks with a view to their future sale, either in their present or in a modified form.”

[4] In the context of financial futures, “basis” is defined as spot price minus the futures price. There is a different basis for each delivery month for each contract.

[5] “One condition which makes [inverse carrying charges] possible is the fact that storage of [commodities] is an enterprise in which most of the costs are fixed costs, from a short-run standpoint. Another important condition is that for most of the potential suppliers of storage, the costs are joint; the owners of large storage facilities are mostly engaged either in merchandising or in processing, and maintain storage facilities largely as a necessary adjunct to their merchandising or processing business. And not only are the facilities an adjunct; the exercise of the storing function itself is a necessary adjunct to the merchandising or processing business. Consequently, the direct costs of storing over some specified period as well as the indirect costs may be charged against the associated business which remains profitable, and so also may what appear as direct losses on the storage operation itself.” (Working, 1949)

[6] Let Pt be the price in period t and let Ct be consumption during t. Consumption in any period equals stocks carried into the period plus current production minus stocks carried out of the period. Consequently price, Pt = ft(Ct), can be rewritten as Pt = ft(St-1 + Xt – St), where St-1 is stocks at the end of period t-1, Xt is production during t and St is stocks at the end of t. In general, price in the next period minus price in the current period may be expressed as a decreasing function of stocks carried out of the current period. Symbolically the demand for storage from period t to period t+1 can be represented as Pt+1 – Pt = ft+1(Ct+1) – ft(Ct), which is equivalent to the formula presented in the article.

[7] mt(St) is the net total cost of storage and mt΄(St) is the net marginal cost of storage. Again let St denote the stocks carried out of period t. Let ot(St) be the total outlay on physical storage where ot΄>0, rt(St) the total risk-aversion factor where rt΄>0, and ct(St) the total convenience yield where ct΄≥0. The net marginal cost of storage need not be positive.

Blas, Javier, “Growing demand for bullion hands banks gold opportunity in storage” Financial Times, June 12, 2010.

Brennan, M. (1958). “The Supply of Storage” American Economic Review, 47(1), pp 50-72.

Fama, Eugene F. and French, Kenneth R. (1988). “Business Cycles and the Behavior of Metals Prices” Journal of Finance, 43(5), pp 1075-1093.

Keynes, John Maynard (1930). A Treatise on Money, Volume II: The Applied Theory of Money, London: Macmillan, 1930, pp 142-147.

Kaldor, Nicholas (1939). “Speculation and Economic Stability” Review of Economic Studies, 7(1), pp 1-27.

Telser, L.G. (1958). “Futures Trading and the Storage of Cotton and Wheat” Journal of Political Economy, 66, pp 223-255.

Working, Holbrook (1948). “Theory of the Inverse Carrying Charge in Futures Markets” Journal of Farm Economics, 30(1), pp 1-28.

Working, Holbrook (1949). “The Theory of Price of Storage” American Economic Review, 39(6), December 1949, pp 1254-1262.

Tuesday, May 18, 2010

Gold Loans and Reversing a Model’s Line of Causation


The 1970s was a crucial turning point in the history of 20th century gold markets. The costs of the Vietnam War and increased domestic spending had the effect of accelerating inflation. Meanwhile, the US gold stock declined to $10 billion versus outstanding foreign dollar holdings estimated at about $80 billion.[1] Prior to that, the London Gold Pool, made up of seven European central banks and the US Federal Reserve and formed to cooperate in maintaining the Bretton Woods System, found itself increasingly unable to balance the outflow of gold reserves and defend the fixed gold price of US$35 per ounce.[2]

On August 15, 1971, President Nixon, a self-proclaimed Republican “conservative,”[3] imposed a 90-day wage and price control program and various other expansionary fiscal policies in what became known as the “Nixon Shock”.[4] Most importantly, Nixon closed the gold window, preventing foreign governments that had been holding dollar-denominated financial assets from demanding gold in exchange for their dollars. By March 1973 all of the major world currencies were floating, and in November 1975 the G-7 (i.e., Group of Seven) formed to hammer out the final details of a framework for a new monetary system. That agreement, finalized in January 1976, called for an end to the role of gold, the establishment of SDRs as the principal reserve asset, and the legitimization of the de facto system of fiat currencies and floating exchange rates.

The reason for retelling this story is that these events, along with the collapse in gold prices after the January 21, 1980 peak of $850, led directly to the formation of the gold leasing market during the mid-1980s. Gold loans evolved as a means for central banks to earn a return on their bullion inventories to cover the cost of warehousing bullion[5][6] by leasing gold in exchange for a lease rate. This rate is derived as the difference between LIBOR and the Gold Forward Offered (GOFO) rate.[7] Alternatively, a central bank could swap gold in exchange for currency such as US dollars.
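The derived lease rate, and the forward price implied by the GOFO swap rate, can be sketched in a few lines of Python. The figures below are illustrative assumptions, not historical quotes, and day-count conventions are ignored:

```python
def implied_lease_rate(libor, gofo):
    """Derived gold lease rate as described in the article:
    LIBOR minus the Gold Forward Offered (GOFO) rate.
    Both inputs are annualized decimal rates."""
    return libor - gofo

def gold_forward(spot, gofo, years):
    """Simple-interest forward price implied by the GOFO rate
    (a sketch; real quotes follow market day-count conventions)."""
    return spot * (1 + gofo * years)

# Illustrative figures only
lease = implied_lease_rate(libor=0.05, gofo=0.045)
assert abs(lease - 0.005) < 1e-12   # 50 basis points
assert gold_forward(1200.0, 0.045, 0.5) > 1200.0
```

A low GOFO rate relative to LIBOR means a high lease rate, i.e., bullion is expensive to borrow; a GOFO rate near LIBOR means leasing gold earns the central bank very little.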

A leasing transaction involves a central bank transferring ownership to a leasing institution (i.e., borrower), who could then sell the gold on the spot market and invest the proceeds. At a later date, the borrower would buy back the gold and return it to the central bank while paying the lease rate. Because gold could be leased at a relatively low rate from the central bank and then sold quickly on the spot market, participants in this market included gold producers who thereby gained cash to finance gold production at a comparatively low rate of interest, while simultaneously hedging against falling gold prices.[8]
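The economics of the transaction just described can be sketched as a simple profit-and-loss calculation. This is a toy model under our own assumptions (the function name, simple interest, and all numbers are illustrative; it ignores margin, transaction costs and timing details):

```python
def carry_trade_pnl(ounces, spot_in, spot_out, invest_rate, lease_rate, years):
    """Simplified gold carry trade: borrow bullion, sell it spot,
    invest the proceeds, then repurchase the gold and return it,
    paying the lease rate on the borrowed value.
    The borrower is effectively short gold, so a rising gold price
    raises the repurchase cost."""
    proceeds = ounces * spot_in                      # sale on the spot market
    invested = proceeds * (1 + invest_rate * years)  # return on proceeds
    repurchase = ounces * spot_out                   # buy gold back later
    lease_fee = proceeds * lease_rate * years        # owed to the central bank
    return invested - repurchase - lease_fee

# Profitable while gold is flat or falling...
assert carry_trade_pnl(100, 300.0, 290.0, 0.05, 0.01, 1.0) > 0
# ...but losses mount when the gold price rises.
assert carry_trade_pnl(100, 300.0, 330.0, 0.05, 0.01, 1.0) < 0
```

This is why, as discussed below, the trade's viability hinges on the lease rate charged and on whether the gold price is rising.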

The market for gold loans developed quickly after the October 1987 stock market crash left many mining companies with reduced access to capital. Prior to 1990, GOFO rates for gold normally were below 2 percent on an annualized basis and never exceeded 3 percent, providing an inexpensive source of finance for mining companies.[9] The Financial Times reported that some 30 central banks were estimated to have engaged in gold loans around this time.[10] Then in 1990 Drexel Burnham Lambert collapsed with large outstanding gold liabilities to many central banks, resulting in increased wariness and reduced supply of gold loans from central banks.[11] As a result, lease rates rose reflecting an increased tightness in the market after the loss of central bank suppliers, as well as a substantial risk premium over the implicit cost of providing such loans.

Nevertheless, the market for gold loans grew throughout the 1990s, and an informal global interbank system developed permitting dealers to borrow gold on a short-term basis in order to fulfill delivery requirements. When bullion subsequently dropped below $300 an ounce in late 1997, and drifted in that range through 2002 in what is now referred to as the “Brown Bottom,”[12] the gold carry trade came to dominate the derivatives markets. Gold’s steady appreciation since 2002, however, has rendered this trade obsolete. As a result, there has been a wholesale transformation in the gold market since the millennium began.

In a research paper published by the Swiss Finance Institute (SFI) titled, On the Lease Rate, the Convenience Yield and Speculative Effects in the Gold Futures Market, the authors examine this aspect of the gold market in detail. They note that, “…since late 2001, the profitability of the carry trade has diminished. Rising gold prices have increased risk and diminished the trade’s profitability as a result of increasing repayment costs. Consequently, the prevalence of the gold carry trade is predicated on two factors: the rate at which the central bank is willing to swap or lease gold, and whether or not the gold price is increasing.” Further, the authors Barone-Adesi, Geman and Theal (2009) observe that the COMEX “is witnessing historically low derived lease rates, decreasing hedging activity and steadily rising non-commercial open interest.”

The reason is that the gold carry trade is risky on two dimensions. First, if the borrower invests in long-term bonds, rising interest rates could put downward pressure on bond prices, exposing the leasing institution to principal risk. Second, since the borrower is effectively short gold, if the loan is called by the central bank and gold has risen in value, the borrower may have to repurchase gold at a higher price in the spot market. Hence there always exists the potential of driving gold prices even higher through short covering. This unwinding of the carry trade, as with other similar trades (e.g., the yen carry trade), can result in volatile markets.

The question then is to what extent is speculation having a “tangible effect” on gold valuations, and “if so, by what mechanism does speculation influence prices?” The SFI paper points out other academics, such as Kocagil (1997), who defined “speculative intensity” as the “spread between the futures and expected spot price,” and concluded that “speculation increases spot price volatility and thus has a destabilizing effect on price.” Another researcher, Abken (1980), based his analysis on the intuition that the only return that gold yields is based on the anticipated appreciation of gold above “any marginal costs associated with the storage of gold.” Abken argues that, “during times of uncertainty, excess demand for gold as a store of value [drives] up the spot price causing stored gold to be brought to market.”

The authors of the SFI paper, on the other hand, base part of their methodology on the work of Houthakker (1957), one of the first researchers to use trader commitment data to study speculation. To understand how speculative agents can affect the gold futures market, Barone-Adesi et al. (2009) examine the open interest data from the CFTC Commitment of Traders (CoT) report, thereby identifying commercial open interest with hedging activity, and conversely, non-commercial positions with speculative activity. The authors also study the relationship between gold leasing and the level of COMEX discretionary inventory.
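The identification step described above, treating non-commercial open interest as speculative and commercial open interest as hedging, amounts to a simple calculation on CoT-style data. The sketch below uses made-up weekly figures (the function name and numbers are ours, not the paper's data or code):

```python
def speculative_share(noncommercial_oi, commercial_oi):
    """Share of reported open interest attributed to speculation,
    following the CoT-based identification: non-commercial positions
    as speculative, commercial positions as hedging."""
    total = noncommercial_oi + commercial_oi
    return noncommercial_oi / total

# Hypothetical weekly (non-commercial, commercial) open interest
weekly = [(120_000, 280_000), (150_000, 270_000), (190_000, 260_000)]
shares = [speculative_share(nc, c) for nc, c in weekly]

# A rising series like this would be read as increasing
# speculative pressure relative to hedging activity.
assert shares == sorted(shares)
```

In practice the paper works with the actual CFTC Commitment of Traders series; this sketch only shows the form of the ratio being tracked.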

Not surprisingly, Barone-Adesi et al. (2009) arrive at some obvious conclusions: First, they note an ever-increasing percentage of non-commercial open interest reflects increased speculation in the gold market. Second, “the lease rate and the speculative pressure appear to work in opposition to one another; the former acts to decrease short-term bullion inventories via lease repayments, while the latter result suggests speculators dominate leasing activity in the long term… Finally, the presence of speculation in gold futures contracts can be associated with increased futures contract returns and that this effect increases with increased futures contract maturity.” What these observations suggest in their entirety is that “speculation plays a significant role in the COMEX gold futures market” as opposed to hedging activities.

Uh, okay… but isn’t this a foregone conclusion? Granted, On the Lease Rate, the Convenience Yield and Speculative Effects in the Gold Futures Market derives its determinations from some interesting theoretical ideas about the relationships between gold loans, bullion inventories, convenience yield and speculation; but in the final analysis this paper raises the specter of Muth’s (1961) Rational Expectations and the Theory of Price Movements: “In order to explain fairly simply how expectations are formed, we advance the hypothesis that they are essentially the same as the predictions of the relevant economic theory.”

In other words, models unfortunately have the bad habit of assuming a predetermined conclusion around which expectations are formed, which in effect reverses the model’s line of causation. Our conclusion: research bias, the process whereby the scientists performing the research influence the results in order to portray a certain outcome, seems to be at work here, even though we happen to agree with Barone-Adesi, Geman and Theal's conclusions.

On the Lease Rate and Convenience Yield of Gold Futures

[1] Spero, Joan Edelman, and Hart, Jeffrey A. (2010). The Politics of International Economic Relations. 7th ed. (originally published 1977). Boston, MA: Wadsworth Cengage Learning.

[2] Bordo, Michael D., and Barry J. Eichengreen (1993). A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. University of Chicago Press. pp. 461–494 “Chapter 9, Collapse of the Bretton Woods Fixed Rate Exchange System” by Peter M. Garber.

[3] Nixon tape conversation No. 607-11.

[4] “The Economy: Changing the World's Money” Time Magazine, Oct. 4, 1971 [First reference by Time of “Nixon Shock”]

[5] “Bullish on Bullion” by Peter Madigan, Risk Magazine, Feb. 1, 2008, Incisive Media Ltd.

[6] According to O’Callaghan, Gary (1991), two key disadvantages in holding gold as opposed to a financial instrument are storage costs and the fact that holding gold does not bear interest.

[7] Barone-Adesi, Giovanni, Geman, Hélyette and Theal, John (2009). “On the Lease Rate, the Convenience Yield and Speculative Effects in the Gold Futures Market” (March 12, 2009). Swiss Finance Institute Research Paper No. 09-07.

[8] O’Callaghan, Gary (1991). "The Structure and Operation of the World Gold Market" International Monetary Fund, IMF Working Paper WP/91/120, Master Files Room C-525, p 33.

[9] Ibid. pp 33-34.

[10] Gooding, Kenneth, “Gold Lending Rate at Record Level,” Financial Times (London), Dec. 4, 1990, p 34.

[11] “Fool’s Gold,” The Economist, Mar. 17, 1990, p 79.

[12] Term used to describe the period between 1999 and 2002, named for the decision of Gordon Brown, then the UK's Chancellor of the Exchequer, to sell half of the UK's gold reserves in a series of auctions.

Barone-Adesi, Giovanni, Geman, Hélyette and Theal, John (2009). “On the Lease Rate, the Convenience Yield and Speculative Effects in the Gold Futures Market” (March 12, 2009). Swiss Finance Institute Research Paper No. 09-07.

Bordo, Michael D., and Barry J. Eichengreen (1993). A Retrospective on the Bretton Woods System: Lessons for International Monetary Reform. University of Chicago Press. pp. 461–494 “Chapter 9, Collapse of the Bretton Woods Fixed Rate Exchange System” by Peter M. Garber.

O’Callaghan, Gary (1991). “The Structure and Operation of the World Gold Market” International Monetary Fund, IMF Working Paper WP/91/120, Master Files Room C-525

Spero, Joan Edelman, and Hart, Jeffrey A. (2010). The Politics of International Economic Relations. 7th ed. (originally published 1977). Boston, MA: Wadsworth Cengage Learning.

Saturday, May 1, 2010

Prisoners Dilemma on the Paradox of Solution

Of all the inhabitants of the inferno, none but Lucifer knows that hell is hell, and the secret function of purgatory is to make of heaven an effective reality. ~Arnold Bennett, English novelist (1867 – 1931)

Martin A. Armstrong is a man who is spending time in purgatory—both literally and figuratively. For those not aware, Armstrong is the former chairman of Princeton Economics International Ltd., indicted in 1999 on charges of bilking Japanese investors. He languished seven years in jail for contempt of court, reportedly the longest anyone has ever been held without trial in United States history. All the while, Armstrong claimed that he was innocent of the fraud itself before finally pleading guilty in 2007. He is now serving a five-year sentence for conspiracy to commit fraud.[1][2]

Said to be “bizarrre (sic) and eccentric,” this “notorious… self-professed expert in the history of money and things gold”[3] is also a prolific writer whose narrative is the kind of grist for the mill influencing Matt Taibbi’s “investigative journalism”.[4] It is therefore not surprising, given Armstrong’s incarceration and his self-claimed title as “America’s #1 Political Prisoner,” to discover that his writings are tinged with conspiracy theories. On the other hand, as Kissinger once noted, “Even a paranoid can have enemies.”[5] Regardless, you are probably asking: why devote time to discussing The Paradox of Solution if Armstrong’s credibility is questionable? [Article continues below embedded paper.]

The Paradox of Solution 4-18-10

To begin with, notwithstanding the background context, it’s a short and interesting read, and for someone you would expect to “rail against the machine” it comes as a surprise to learn that Armstrong defends the establishment. Well, at least in regard to the “original intent” of the Federal Reserve and his implied support for the Volcker rule. On this and the U.S. Government’s evolving policy response to the financial crisis, Armstrong reveals himself to be a man seemingly caught between faith and skepticism. His voice predictably adds to the chorus of doomsayers, but then he offers ineffectual advice as to how we can avoid our fate.

Second, while some may argue that Armstrong suffers from the Dunning–Kruger effect, the literary tradition of the wise fool who “speakest but sad truths”[6] provides reason enough to read The Paradox of Solution. As William S. Burroughs once said, “Sometimes paranoia’s just having all the facts.” And where Armstrong’s essay becomes truly fascinating is, in our humble opinion, the last third of the article, where he begins to discuss that “meaningless and intangible social construct,”[7] money.
When money was tangible, debasement was the game. Hence, I’m sorry, but a GOLD & SILVER STANDARD will not eliminate inflation! Those who argue for “sound money” just do not understand that what is money, is far less important than WHO controls money. Politicians from ancient times have always sought to make just a little bit more to cover their spending that has never been responsible. Some have needed money desperately to defend the nation as was the case in Athens. Others just wanted to spend more to have nice things like Nero. Regardless of the reason, it is WHO controls the quantity and quality of money that truly matters—NOT what is actually money!
There is a video on the Intertubes[8] featuring a “steel cage death match” on the subject of “inflation versus deflation” between James Grant, founder of the venerable Grant's Interest Rate Observer, and the well-respected David Rosenberg, chief economist with Canada’s Gluskin Sheff & Associates [see:]. Say what you will about Armstrong, but in both this debate and a March 31, 2010 Bloomberg interview with Keith McCullough, James Grant had the nerve to ask a most important matter of faith, ‘what is the US dollar?’
But maybe that’s the trouble with Treasuries. The fundamental trouble is that they are IOUs denominated in a currency that nobody knows what it is. What’s a dollar? What’s a dollar? David can’t tell me. Nobody can tell me. If Alexander Hamilton were here, and unbriefed (sic) on the events of the past two hundred years, he could tell me. And what he would say is that the dollar is defined as 371 and ¼ grains of pure silver, or 24 and ¾ grains of pure gold. And it was too under the 1792 Coinage Act which Hamilton himself wrote. And those were the definitions, and incidentally, Section 19 of that landmark monetary legislation ends in these telling words, quote “shall suffer death” closed quote. Now, can you tell me what kind of felony was punishable by death under an Act to regulate the mint? The act of debasing the currency was—that was a capital offense. So watch your back Ben Bernanke [audience laughter], cycles turn. [Time: 8’50”]
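As a quick check on the figures Grant cites, the two statutory definitions in the 1792 Coinage Act imply the famous 15:1 silver-to-gold price ratio of the early American bimetallic standard. The arithmetic below uses only the grain weights from Grant’s remarks:

```python
# The 1792 Coinage Act definitions of the dollar, per Grant's remarks.
silver_grains = 371.25  # grains of pure silver per dollar (371 and 1/4)
gold_grains = 24.75     # grains of pure gold per dollar (24 and 3/4)

# The implied statutory silver-to-gold price ratio:
ratio = silver_grains / gold_grains
print(ratio)  # 15.0 -- the bimetallic 15:1 ratio
```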
Nevertheless, “cash is king,” especially in a liquidity crisis; however, for cash to function as a means to discharge debt, the largely unasked question is: what defines cash as cash?[9] Such philosophical musings may seem to come from the realm of madmen or fools, but when the highly respected James Grant half-jokingly references the satirical article by The Onion, U.S. Economy Grinds to Halt As Nation Realizes Money Just A Symbolic, Mutually Shared Illusion,[10] you know that something is amiss. But what exactly?

Armstrong, alluding to a collective fear of what the future holds for “Western dominance,” strikes at the very heart of the matter. In hunter-gatherer societies, “there is no TRADE because there is no concept of the future as we understand it… In order for society to have created a concept of money, there had to have been the realization that there is a tomorrow. Something of VALUE is now to be stored and retained… Once this concept of future was born yielding the idea of VALUE, everything from banking to derivatives began to emerge even in ancient times… The concept of barter thus emerged with the concept of future. Then, money emerged as a universal language of barter… The wealth of a nation emerged from the productive forces of its people.”

According to Ludwig von Mises, “The concept of money as a creature of law and the state is clearly untenable. It is not justified by a single phenomenon of the market. To ascribe to the state the power of dictating the laws of exchange, is to ignore the fundamental principles of money-using society.”[11] Armstrong echoes this sentiment, “It is nice to create a unrealistic view of a world where all we have to do is restore a gold standard and all will be well.” But then he turns around and disagrees with Mises, “It is not what is money, but who controls it that has always counted.”

Armstrong seems to view the world as a top-down establishmentarian exercise by a political class, which in turn he rails against. But with respect to the “control” of money, it has not always been true that this was the function of government. Alessandri and Haldane (2009), who reference Reinhart and Rogoff (2009) in their paper Banking on the State, clarify monetary history for us: “From the earliest times, the relationship between banks and the state was often rocky. Sovereign default on loans was an everyday hazard for the banks, especially among states vanquished in war. Indeed, through the ages sovereign default has been the single biggest cause of banking collapse.”

John Locke once wrote, “Credit is nothing but the expectation of money, within some limited time.”[12] As Grant (1994) noted, “To credit is to believe and to lend money it is necessary to trust someone.” Yet, financial history is rife with periods when “prolonged prosperity wore down the skepticism of creditors” only to result in eras of economic hardship.[13] It seems we have entered such a period, with Chinese officials and some leading economists wanting a greater role for Special Drawing Rights (SDRs) in foreign exchange reserves. Yet, as Aiyar (2009) noted in his paper, An International Monetary Fund Currency to Rival the Dollar?, “…the SDR is not a currency in its own right. Rather, it is a derivative of four national currencies. A derivative is not a currency.”
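Aiyar’s point that the SDR is “a derivative of four national currencies” can be made concrete: an SDR’s dollar value is computed from a fixed basket of currency amounts converted at market exchange rates. The basket amounts and rates below are hypothetical placeholders for illustration, not official IMF figures:

```python
# The SDR is valued as a fixed basket of currency amounts, each converted
# to U.S. dollars at market rates. Amounts and rates here are hypothetical
# placeholders, not official IMF figures.
basket = {       # currency units per 1 SDR (illustrative)
    "USD": 0.632,
    "EUR": 0.410,
    "JPY": 18.4,
    "GBP": 0.0903,
}
usd_rates = {    # U.S. dollars per unit of each currency (illustrative)
    "USD": 1.0,
    "EUR": 1.35,
    "JPY": 0.011,
    "GBP": 1.60,
}

# The SDR's dollar value is just the basket marked to market:
sdr_in_usd = sum(amount * usd_rates[ccy] for ccy, amount in basket.items())
print(round(sdr_in_usd, 4))
```

The SDR’s value thus moves only as its component currencies move, which is why it is a derivative rather than a currency in its own right.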

Armstrong, to be fair, taps into the current zeitgeist of confusion regarding the Frankenstein nature of our financial system. As Selgin (2010), senior fellow at the Cato Institute, affirmed: “the Jekyll and Hyde nature of contemporary central banks… has made apparent our utter dependence on such banks as instruments for assuring the continuous flow of credit in the aftermath of a financial bust and the same institutions’ capacity to fuel the financial booms that make severe busts possible in the first place.” In that one line Selgin actually does a better job of describing Armstrong’s so-called “paradox of solution”.

In the final analysis, however, Armstrong’s ramblings are indeed perplexing. He is anti-Washington, anti-SEC, and anti-CFTC, but at the same time he wants to “regulate the REPO market to eliminate posting collateral that explodes in the middle of the night.” Is this to be done by a neutered Federal Reserve? According to Armstrong, who positions himself as a Fed apologist, “The Fed is incapable to [interest rate] management and should just raise or lower the reserve ratio banks must put up at the Fed.” As the saying goes, with friends like these, who needs enemies?

Maybe Armstrong is just economic literature’s “wise fool” reincarnated in real life. Granted, the world is now a place in which Joseph Stiglitz, the Nobel Prize-winning economist and Columbia University professor, speaks against the orthodoxy of “rational expectations,” asserting that economists are among those at fault for the financial crisis, which exposed “major flaws” in prevailing ideas.[14] Wise fool or not, Armstrong comes across as a man who seeks redemption by trying to save us from ourselves. Then again, the same could be said about Fed Chairman Ben Bernanke. Perhaps Sir Walter Scott put it best in his novel, Ivanhoe: “Our heads are in the lion's mouth,” said Wamba, in a whisper to Gurth, “get them out how we can.”[15]

[1] Martin A. Armstrong. (2010, April 9). In Wikipedia, Free Encyclopedia. Retrieved: May 1, 2010, from

[2] “Japanese Regulators Get a 2d 'Scalp' Under Their Belts.” Stephanie Strom, New York Times, September 17, 1999.

[3] “The Enigma of Martin Armstrong.” January 04, 2009.

[4] “What Is True/Slant?” Matt Taibbi, Taibblog

[5] Source: “His Legacy: Realism and Allure.”, Jan. 24, 1977.

[6] Scott, Walter, Sir (1771–1832). Ivanhoe. Edinburgh: A. & C. Black.

[7] “U.S. Economy Grinds to Halt As Nation Realizes Money Just a Symbolic, Mutually Shared Illusion.” The Onion, Inc., Issue 46-07, February 16, 2010.

[8] Etymology: chiefly Internet slang, humorous, from the “Series of tubes” analogy used on June 28, 2006 by then United States Senator Ted Stevens to describe the Internet.

[9] Frankfurter, Michael “Mack” (2010). “The Mysterious Case of Hamilton’s Monetary Enterprise, the Financial Instability Hypothesis, and Exter’s Inverted Pyramid.” Draft Article, February 15, 2010.

[10] “U.S. Economy Grinds to Halt As Nation Realizes Money Just a Symbolic, Mutually Shared Illusion.” The Onion, Inc., Issue 46-07, February 16, 2010.

[11] Von Mises, Ludwig and Greaves, Percy L. (1978). “On the Manipulation of Money and Credit.” Dobbs Ferry, N.Y.: Free Market Books, p. 69.

[12] Source: “Pleas for peace and union against political intolerance and sectional animosity: Speeches of Thomas F. Bayard, R. E. Withers, S. B. Maxey in the Senate of the United States, March 30, 1876, with the speech of George S. Boutwell.” Harvard University, 1876. Digital Copy, p. 74. “Speech of Hon. Thomas Francis Bayard, of Delaware, in the United States Senate, May 27, 1878.”

[13] Grant, James (1994). “Money of the mind: Borrowing and lending in America from the Civil War to Michael Milken.” New York: Noonday Press, pp. 5 & 8.

[14] “Stiglitz Says Crisis Exposed ‘Major Flaws’ in Economic Ideas.” Scott Lanman, BusinessWeek, January 2, 2010.

[15] Scott, Walter, Sir (1771–1832). Ivanhoe. Edinburgh: A. & C. Black.

Aiyar, Swaminathan S. (2009). “An International Monetary Fund Currency to Rival the Dollar?” The Cato Institute, Center for Global Liberty & Prosperity, Development Policy Analysis, No. 10, July 7, 2009.

Frankfurter, Michael “Mack” (2010). “The Mysterious Case of Hamilton’s Monetary Enterprise, the Financial Instability Hypothesis, and Exter’s Inverted Pyramid.” Docstoc, Draft Article, February 15, 2010.

Grant, James (1994). “Money of the mind: Borrowing and lending in America from the Civil War to Michael Milken.” New York: Noonday Press.

Von Mises, Ludwig and Greaves, Percy L. (1978). “On the Manipulation of Money and Credit.” Dobbs Ferry, N.Y.: Free Market Books.

Haldane, Andrew G. and Alessandri, Piergiorgio (2009). “Banking on the State.” Bank of England. Based on a presentation delivered at the Federal Reserve Bank of Chicago twelfth annual International Banking Conference on “The International Financial Crisis: Have the Rules of Finance Changed?” 25 September 2009.

Reinhart, C. M. and Rogoff, K. (2009). “This Time is Different: Eight Centuries of Financial Folly.” Princeton University Press.

Selgin, George (2010). “Central Banks as Sources of Financial Instability.” The Cato Institute, The Independent Review, V. 14, N. 4, Spring 2010, pp. 485-496.

Thursday, April 15, 2010

Managed Futures: Is It an Asset Class?

For those unfamiliar with the term managed futures, it is a niche sector of alternative investments that evolved out of the Commodity Futures Trading Commission Act of 1974, and refers to professionally managed assets in the commodity and financial futures markets. Management is facilitated by either Commodity Trading Advisors (CTAs) or Commodity Pool Operators (CPOs) who are regulated by the Commodity Futures Trading Commission (CFTC) and the National Futures Association (NFA).

Managed futures is the little kid brother to the hedge fund juggernaut. Even so, its impact upon the industry is writ large in two significant and related ways: first, managed futures, unlike its brethren hedge funds, operates in a highly regulated environment; second, this same regulated environment, which imposes disclosure and reporting requirements, lowers barriers to entry and allows new talent to emerge. Interestingly, the institutionalization of alternative investments can be traced back to the development of managed futures performance tracking databases first established around 1979. This data became the basis for an academic body of research on managed futures, beginning with the seminal study by Harvard Business School professor Dr. John E. Lintner.

So is managed futures an asset class? Let’s cut to the chase… in this writer’s humble opinion, the answer is no, absolutely not. Well, maybe we would reconsider if managed futures were confined to just commodity interests, but with futures contracts also trading on financials, managed futures is as much an asset class as are registered investment advisers, mutual funds, or hedge funds.

Lee, Malek, Nash and Rose (2006), on the other hand, would beg to differ. Their paper The Beta of Managed Futures makes the case that the predominant strategy in this space is trend following, and thus an appropriate benchmark for managed futures is one that mechanically mimics trend following systems. To say the least, it’s an interesting approach, and one which addresses issues related to peer group analysis and indices based on a composite of individual CTA programs. As Lee, Malek, Nash and Rose posit, “CTA indices represent the result of investing in CTAs, not the results of investing like CTAs.”

The weak part of their thesis, however, is its assumption that managed futures essentially represents just trend-following strategies. Lee, Malek, Nash and Rose readily admit that CTAs “employ a wide range of methods” and that such methods are “by no means exhaustive,” including “breakout systems, systems based on moving averages and systems based on pattern recognition”. They attempt to reconcile this issue by creating a “beta benchmark” that “consists of twenty systems trading the world’s most liquid… markets”. According to their study, this benchmark, for the period analyzed, was highly correlated with large CTAs.
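To make the idea concrete, here is a minimal sketch of the kind of mechanical rule such a “beta benchmark” aggregates, a moving-average crossover. The lookback windows and price series are illustrative assumptions, not the parameters from the Conquest paper:

```python
# A minimal moving-average crossover rule, one of the mechanical system
# types a trend-following benchmark might aggregate. Lookbacks and prices
# are illustrative assumptions, not the paper's parameters.

def moving_average(prices, window):
    """Trailing average of the last `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, fast=5, slow=20):
    """+1 (long) when the fast MA is above the slow MA, -1 (short)
    otherwise; 0 when there is not yet enough price history."""
    if len(prices) < slow:
        return 0
    return 1 if moving_average(prices, fast) > moving_average(prices, slow) else -1

uptrend = [100 + 0.5 * i for i in range(40)]    # steadily rising prices
downtrend = [100 - 0.5 * i for i in range(40)]  # steadily falling prices
print(crossover_signal(uptrend), crossover_signal(downtrend))  # 1 -1
```

A benchmark built this way holds positions the way a trend follower would, which is what lets it represent “investing like CTAs” rather than investing in them.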

That said, a mechanical trading index approach still leaves questions, including the validity of the trading methods utilized and the robustness of the parameters used to supposedly define the “beta of managed futures”. At a more subtle level, questions are raised by a relatively new concept proposed by Lo (2004) called the Adaptive Markets Hypothesis (AMH). AMH is based on an evolutionary approach to economic interactions and builds on the research of Wilson (1975), Lo (1999) and Farmer (2002) in applying the principles of competition, reproduction and natural selection.

In light of AMH, the paper by Lawrence Harris, The Winners and Losers of the Zero-Sum Game: The Origins of Trading Profits, Price Efficiency and Market Liquidity provides an intellectually honest answer as to the true dynamics underlying managed futures.

The following is from the paper’s abstract; written in 1993, it is not something you’d likely see in an academic paper nowadays: “Trading is a zero-sum game when measured relative to underlying fundamental values. No trader can profit without another trader losing. People trade because they obtain external benefits from trading… Three groups of stylized characteristic traders are examined. Winning traders trade for profit. Utilitarian traders trade because their external benefits of trading are greater than their losses. Futile traders expect to profit but for a variety of reasons their expectation are not realized.”
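Harris’s zero-sum accounting can be sketched in a few lines: measured against fundamental value, every unit of profit one trader books is a loss to a counterparty. The trades and trader labels below are hypothetical, chosen only to mirror Harris’s stylized groups:

```python
# Harris: relative to fundamental values, trading P&L sums to zero.
# Hypothetical trades: (buyer, seller, price, fundamental_value, quantity).
trades = [
    ("winner", "futile", 98.0, 100.0, 10),       # buyer pays under value
    ("utilitarian", "winner", 103.0, 100.0, 5),  # buyer pays over value
]

pnl = {}
for buyer, seller, price, value, qty in trades:
    edge = (value - price) * qty  # buyer's gain measured vs fundamentals
    pnl[buyer] = pnl.get(buyer, 0.0) + edge
    pnl[seller] = pnl.get(seller, 0.0) - edge  # seller loses the same edge

print(pnl)                 # {'winner': 35.0, 'futile': -20.0, 'utilitarian': -15.0}
print(sum(pnl.values()))   # 0.0 -- zero-sum relative to fundamental value
```

The “utilitarian” trader here loses against fundamentals but may still trade willingly, because, as Harris puts it, the external benefits of trading exceed the losses.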

Harris goes on to discuss the obvious but little acknowledged fact that, “Trading performance reflects a combination of skill and luck. Successful traders may be skilled traders or simply lucky unskilled traders. Likewise, unsuccessful traders may be unskilled traders or unlucky skilled traders... We would like to believe that skill accounts for most variation in past performance among traders and managers,” but “from past performance alone, you cannot confidently determine who is skilled and who is lucky.” Therein lies the conundrum and the alternative investment industry's dirty little secret.
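Harris’s skill-versus-luck point is easy to demonstrate with a toy simulation. On purely illustrative assumptions, give 1,000 traders zero skill (coin-flip periods) and count how many nonetheless compile track records that look skilled:

```python
# Toy simulation of Harris's point that past performance alone cannot
# separate skill from luck. All numbers are illustrative assumptions.
import random

random.seed(42)  # fixed seed so the sketch is reproducible
N_TRADERS, N_PERIODS = 1000, 10

def winning_periods():
    """Profitable periods for a trader with no edge (win prob = 0.5)."""
    return sum(1 for _ in range(N_PERIODS) if random.random() < 0.5)

records = [winning_periods() for _ in range(N_TRADERS)]
stars = [w for w in records if w >= 8]  # records that "look skilled"

# Each zero-skill trader has roughly a 5.5% chance of 8+ wins out of 10,
# so dozens of "star" track records emerge from luck alone.
print(len(stars) > 0)  # True
```

With no skill anywhere in the population, an allocator screening on track records would still find plenty of apparent winners, which is the industry’s “dirty little secret” in miniature.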

From this 20,000 foot level, the paper drills down and “examines the economics that determine who wins and who loses when trading.” Harris considers “the styles of value-motivated traders, inside informed traders, headline traders, event study traders, dealers, market-makers, specialists, scalpers, day traders, upstairs position traders, block facilitators, market data monitors, electronic proprietary traders, quote-matchers, front-runners, technical traders, chartists, momentum traders, contrarians, pure arbitrageurs, statistical arbitrageurs, pairs traders, risk arbitrageurs, bluffers, ‘pure’ traders, noise traders, hedgers, uninformed investors, indexers, pseudo-informed traders, fledglings and gamblers.” The paper goes on to “describe each of these traders, explain how their trading generates profits or losses, and consider how they affect price efficiency and liquidity.”

Because this paper was written in the early 1990s, some of its descriptions may admittedly be dated relative to technological and quantitative developments in trading since. Nevertheless, Winners and Losers of the Zero-Sum Game is a little-noticed gem of a working paper whose astute observations ring true even today, despite the escalating arms race in academic working papers being spun out of the university-industry revolving door.

Then why is managed futures constantly referred to as an asset class? Answer: out of laziness. Such laziness, however, is not the financial industry’s responsibility alone; truth is, half the problem lies with investors themselves. Try as one might to delineate sophisticated investment concepts, the most common reaction is investors’ eyes glazing over.

So if managed futures is not an asset class, then what is it? As with much of the acronyms and lingo that the financial industry regularly comes up with, mainly for marketing reasons, the term has become a misnomer. What started out as an investment activity defined by regulation is now conventionally considered by many an asset class. C'est la vie.


Harris, Lawrence. “The Winners and Losers of the Zero-Sum Game: The Origins of Trading Profits, Price Efficiency and Market Liquidity” School of Business Administration, University of Southern California, Draft 0.911, May 7, 1993.

Lee, Timothy C.; Malek, Marc H.; Nash, Jeffrey T.; and Rose, Jeffrey M. “The Beta of Managed Futures,” Conquest Capital Group LLC, February 2006.

Lintner, John E. “The Potential Role of Managed Commodity—Financial Futures Accounts (and/or Funds) in Portfolios of Stocks and Bonds” Presented at the Annual Conference of the Financial Analysts Federation, May 1983.

Lo, Andrew W. “The Adaptive Markets Hypothesis; Market efficiency from an evolutionary perspective” The Journal of Portfolio Management, 30th Anniversary Issue 2004, pp. 15-29.