Intelligent Trading: Discovering edge using Machine Learning, Data Mining, and Bio Inspired Algorithms to augment traditional Systematic Development. (http://www.blogger.com/profile/17765336450326139518)

Review: Machine Learning An Algorithmic Perspective 2nd Edition (2015-04-03)<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-GYMCEK9LAsc/VR2tYc7uRAI/AAAAAAAAAa4/1oXxwbTKJ_w/s1600/ML1.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-GYMCEK9LAsc/VR2tYc7uRAI/AAAAAAAAAa4/1oXxwbTKJ_w/s1600/ML1.jpg" height="320" width="225" /></a></div><br /><a href="http://www.amazon.com/Machine-Learning-Algorithmic-Perspective-Recognition/dp/1466583282">Fig 1. Machine Learning: An Algorithmic Perspective. 2nd Edition. Stephen Marsland</a><br /><br /><br /><div style="text-align: left;"> I just wanted to briefly share some initial impressions of the 2nd edition of Stephen Marsland's very hands-on text, "Machine Learning, <i>An Algorithmic Perspective.</i>" Having been a big fan of the first, I requested a copy from Dr. Marsland, and with his help, the publishers were kind enough to send me a review copy. I spent the last few months going over most of the newer topics and testing many of the newer scripts in Python. With that, I'll dive into some of my impressions of the text.</div><div style="text-align: center;"><br /></div> I've stated before that I thought the 1st edition was, hands down, one of the best texts covering applied Machine Learning from a Python perspective. I still consider this to be the case.
The text, already extremely broad in scope, has been expanded to cover some very relevant modern topics, including:<br /><ul><li>Particle Filtering (expanded coverage with a working implementation in Python).</li><li>Deep Belief Networks</li><li>Gaussian Processes</li><li>Support Vector Machines (now with a working implementation using the cvxopt optimization wrapper).</li></ul><br /> Those topics alone should generate a significant amount of interest from readers. There are several things that separate this text's approach from many of the other texts covering Machine Learning. One is that the text covers a very wide range of useful topics and algorithms. You rarely find a Machine Learning text with coverage in areas like evolutionary learning (genetic programming) or sampling methods (SIR, Metropolis-Hastings, etc). This is one reason I recommend the text highly to students of MOOC courses like Andrew Ng's excellent 'Machine Learning', or Hastie and Tibshirani's 'An Introduction to Statistical Learning'. Many of these students are looking to expand their set of skills in Machine Learning, with a desire to access working, concrete code that they can build and run.<br /><br /> While the book does not overly focus on mathematical proofs and derivations, there is sufficient mathematical coverage to enable the student to follow along and understand the topics. Some knowledge of Linear Algebra and notation is always useful in any Machine Learning course. Also, the text is written in such a way that if you simply want to cover a topic, such as particle filtering, you don't necessarily need to read all of the prior chapters to follow. This is useful for those readers looking to refresh their knowledge of more modern topics.<br /><br /> I did, occasionally, have to translate some of the Python code to work with Python 3.4. However, the editing was very minimal. For example, print statements in earlier versions did not require parentheses around the print arguments.
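For instance, the Python 2 statement form and its Python 3 function-call equivalent (the bundled 2to3 tool automates exactly this kind of rewrite):

```python
# Python 2 syntax:  print 'this'   <- a SyntaxError under Python 3
# Python 3 treats print as an ordinary built-in function:
message = 'this'
print(message)
```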
So you can just change print 'this' to print('this'), for example.<br /><br /> I found the coverage of particle filters and sampling highly relevant to financial time series-- as we have seen, such distributions often require models that depart from normality assumptions. I might add a tutorial on this sometime in the future.<br /><br /> In summary, I highly recommend this text to anyone who wants to learn Machine Learning and finds that the best way to augment learning is having access to working, concrete code examples. In addition, I particularly recommend it to those students who have followed along from more of a Statistical Learning perspective (Ng, Hastie, Tibshirani) and are looking to broaden their knowledge of applications. The updated text is very timely, covering topics that are very popular right now and have little coverage in existing texts in this area. <br /><br />*Anyone wishing to take a deeper look at the topics covered can find additional information and code on the <a href="https://seat.massey.ac.nz/personal/s.r.marsland/MLBook.html">author's website</a>.<br /><br />

IBS Reversion (Stingray Plot) Intuition using R vioplot Package (2013-04-29)<div class="separator" style="clear: both; text-align: center;"></div><div style="text-align: center;"><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-qF8HaEvQtYA/UX9-0NhV3ZI/AAAAAAAAAY8/Sur8rjBiOjo/s1600/skew3a.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="208"
src="http://1.bp.blogspot.com/-qF8HaEvQtYA/UX9-0NhV3ZI/AAAAAAAAAY8/Sur8rjBiOjo/s400/skew3a.jpg" width="400" /></a></div>Fig 1. Vioplot of IBS filtered next day returns on SPY.</div><div style="text-align: center;"><br /></div><div style="text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-P1hKpjICLjE/UX9-CautlBI/AAAAAAAAAYs/bhn3-FTQKEg/s1600/skew2a.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="81" src="http://2.bp.blogspot.com/-P1hKpjICLjE/UX9-CautlBI/AAAAAAAAAYs/bhn3-FTQKEg/s400/skew2a.jpg" width="400" /></a></div><div style="text-align: justify;"><div style="text-align: center;"> Fig 2. Table of Summary of Results for rtn.tom vs. IBS.class (H,L,M) </div><div style="text-align: center;"><br /></div> A colleague and I were recently discussing ways to get intuition about the <a href="http://intelligenttradingtech.blogspot.com/2013/01/ibs-reversion-edge-with-quantshare.html">IBS classification method</a> for reversion systems. I thought I'd share a <a href="http://cran.r-project.org/web/packages/vioplot/index.html">violin plot</a> I generated that might help to get some visual intuition about it. We can download and process next day returns for an asset like SPY and group classes into LOW (IBS < 0.2), HIGH(IBS > 0.8), and MID(all others). One thing you can see in the plots is the pronounced right skew in the LOW class, and left skew in the HIGH class (they sort of resemble opposing stingrays -- stingray plots might be a more apt term for the reversion phenomena); while the MID class tends to be more symmetrical. 
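For readers who prefer Python, the same grouping can be sketched with pandas (the column names and exact bin edges are assumptions; the post itself used R's vioplot):

```python
import numpy as np
import pandas as pd

def ibs_classes(df):
    """Attach IBS, an L/M/H class label, and the next day's return.

    Expects 'High', 'Low', 'Close' columns (an assumption about the data
    layout). LOW is IBS < 0.2 and HIGH is IBS >= 0.8, approximating the
    post's cutoffs.
    """
    rng = (df['High'] - df['Low']).replace(0, np.nan)  # guard against flat bars
    ibs = (df['Close'] - df['Low']) / rng
    cls = pd.cut(ibs, bins=[-np.inf, 0.2, 0.8, np.inf],
                 labels=['L', 'M', 'H'], right=False)
    rtn_tom = df['Close'].pct_change().shift(-1)       # next close-to-close return
    return pd.DataFrame({'IBS': ibs, 'IBS.class': cls, 'rtn.tom': rtn_tom})
```

`out.groupby('IBS.class')['rtn.tom'].describe()` then mirrors the summary table of Fig 2, and matplotlib's `Axes.violinplot` plays the role of R's vioplot for the density shapes.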
The nice thing about the vioplot visualization is that it includes the density shape of the return distribution, which adds intuition over more common box-and-whisker plots.<br /><br /><br /></div>

Is CTA trend following Dead? (2013-03-10)<br /><br />This is just a very short comment related to discussions I've been having with a friend about trend following funds and a lot of the recent blogs and debates proclaiming the death of trend following.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-zOM6lysya0k/UT0hPI-X3mI/AAAAAAAAAXs/TPfI116xnuw/s1600/barclayCTA.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="267" src="http://3.bp.blogspot.com/-zOM6lysya0k/UT0hPI-X3mI/AAAAAAAAAXs/TPfI116xnuw/s400/barclayCTA.jpg" width="400" /></a></div><br /><div style="text-align: left;"> Fig. 1 Barclay CTA Index</div><br />Using the Barclay CTA Index as a proxy, we can certainly see that there was a huge level shift in performance from around the 1990s onwards, making the annual return appear to be on an almost exponential decay.
However, from about 1990 to the present, the annual returns have been steadily oscillating in a range band from around -1% to 13% (with some outliers), and although the return per decade has been dropping over the last few decades, there's not really enough data to make any strong judgements about its demise. Just looking at the oscillatory behavior and considering the bearishness towards this class of funds-- it might just suggest a good reversion-based long bet on trend-followers.

IBS reversion edge with QuantShare (2013-01-04)
Happy New Year to readers; my resolution this year is to continue delivering thoughts and ideas to others in the hopes that we all might be able to benefit somewhat from sharing observations. I'll start by describing an edge using <a href="http://www.quantshare.com/">QuantShare</a> as the back-testing engine.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-o_VOVjXbJR0/UOd4Kdo2OrI/AAAAAAAAAWQ/SB_7u1LqiTE/s1600/IBSoverfit.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="313" src="http://4.bp.blogspot.com/-o_VOVjXbJR0/UOd4Kdo2OrI/AAAAAAAAAWQ/SB_7u1LqiTE/s400/IBSoverfit.jpg" width="400" /></a></div> Fig 1. Optimized (overfit) SPY IBS Long run Performance<br /><br /><br />Have you ever had an edge that worked fairly well and then suddenly found that it was almost simultaneously revealed across several sources? This seemed to be the case with an edge I had discovered some time back via data mining.
I first noticed it was divulged in a recent copy of Active Trader, '<a href="http://www.activetradermag.com/index.php/c/Trading_Strategies/d/The_low-close_edge">The low-close edge</a>,' by Nat Stewart. Later, I found the concept had also spread out into the blogosphere. Two notable write-ups can be found <a href="http://adaptivetrader.wordpress.com/2012/12/28/cumulative-ibs-indicator/">here</a> and <a href="http://qusma.com/2012/11/06/closing-price-in-relation-to-the-days-range-and-equity-index-mean-reversion/">here</a>. The acronym IBS seems to be the buzzword floating around, so I'll continue to use it here. IBS stands for Internal Bar Strength (not to be confused with the IBS that many traders might have developed over the years). The strength indicator is described with an extremely simple equation:<br /><br />$IBS = \dfrac{Close - Low}{High - Low}$<br /><br />What it describes is the relative position of the close with respect to the low-to-high range of the period. When Jaffray Woodriff was interviewed in the latest Hedge Fund Market Wizards book, he described a very simple predictive indicator (based only upon transformations of the Open, High, Low, and Close of the data) that had proved remarkably stable over the years. It inspired some <a href="http://stats.stackexchange.com/questions/31513/new-revolutionary-way-of-data-mining">debate</a> in statistical and machine learning circles, but it nevertheless sparks images of a holy grail. If there was ever a hypothesis model that came close to his description, I'd certainly consider this a candidate for the reversion side. In addition to simplicity, one of the reasons it is so useful is that, unlike many other approaches at feature transformation of raw financial series, it is scale invariant and does not require any further scaling to support non-stationary data. The results of the transformation will always be bound between 0 and 1 (i.e., 0 to 100%).
So the transformed features will always be bound inside of a fixed and finite space regardless of the evolving data properties (a great property for machine learning). The algorithm runs very fast and does not require frequent readjusting of model parameters unlike many online or econometric based models.<br /><br />The system simply buys at the close when the IBS indicator closes near the low end of the day and goes short when the indicator closes near the high of the day; exit is next day close. Much of the time the indicator is neutral or no trade, allowing a good net risk adjusted return with low exposure. The thresholds, while often mentioned as being set to the 0.25 and 0.75 quartiles of the range, can be adjusted or found manually in the optimization settings.<br /><br /><span class="tag"></span><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-nb0oJZ6fWOc/UOd4ezKOdqI/AAAAAAAAAWY/KZjgAzprBYE/s1600/quantISIBS.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="318" src="http://3.bp.blogspot.com/-nb0oJZ6fWOc/UOd4ezKOdqI/AAAAAAAAAWY/KZjgAzprBYE/s400/quantISIBS.jpg" width="400" /></a></div> Fig 2. Performance fit In Sample Optimization (to 2000).<br /><br />In order to avoid hindsight bias and over-fitting error (as in Fig 1.), I show an optimization using only in sample data for SPY (yahoo data) up to the year 2000 (rank sorted by CAGR and Sharpe). One interesting thing we notice is that the higher threshold is actually optimized to 1, meaning no reversion from the high/short side. This is consistent with what we would expect with a long bias/drift market. We always have to be careful about systematic shorting with a market that has long term positive drift. 
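A bare-bones Python rendition of these rules (a sketch, not the QuantShare code attached below: 0.25/0.75 thresholds, next-close exit, no slippage or commissions; the column names are assumptions):

```python
import pandas as pd

def ibs_strategy(df, lo=0.25, hi=0.75, allow_short=True):
    """IBS reversion: long at the close when IBS < lo, short when IBS > hi
    (set allow_short=False to mimic the in-sample optimum of hi = 1),
    exiting at the next close."""
    ibs = (df['Close'] - df['Low']) / (df['High'] - df['Low'])
    pos = pd.Series(0.0, index=df.index)
    pos[ibs < lo] = 1.0
    if allow_short:
        pos[ibs > hi] = -1.0
    # a position taken at today's close earns tomorrow's close-to-close return
    rtn = df['Close'].pct_change().shift(-1)
    strat = (pos * rtn).fillna(0.0)
    equity = (1.0 + strat).cumprod()
    return strat, equity
```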
<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-B_fvpGLNWvI/UOe3DGDkyqI/AAAAAAAAAXA/959TTkrKSnc/s1600/oos.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="302" src="http://1.bp.blogspot.com/-B_fvpGLNWvI/UOe3DGDkyqI/AAAAAAAAAXA/959TTkrKSnc/s400/oos.jpg" width="400" /></a></div><br /> Fig 3. In Sample / Out of Sample Performance<br /> with In Sample fitted parameters.<br /><br />Fortunately, even without the short high-reversion side, the system performed well for the rest of the out-of-sample data (Fig 3.). A last comment is that looking at the over-fit data should give us some insight about reversion systems and high-volatility sell-off regimes.<br /><br /><br />I've attached code to allow readers to repeat the results.<br /><br /><script src="http://pastebin.com/embed_js.php?i=BcxeNTqt"></script><br />Simulated back-test results, long only. $10,000 principal. No slippage/commissions included.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-8HQOIQMBO2M/UOeJh-9VpPI/AAAAAAAAAWw/-lhrVmXi-Ag/s1600/IBSlongresults.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="291" src="http://2.bp.blogspot.com/-8HQOIQMBO2M/UOeJh-9VpPI/AAAAAAAAAWw/-lhrVmXi-Ag/s320/IBSlongresults.jpg" width="320" /></a></div><br />

Book Review: R for Business Analytics, A Ohri (2012-10-26)<div style="text-align: left;"><br /> I've added a recently released book to my list of recommendations (at the amazon carousel to the right), as I've reviewed a copy provided to me via Springer Publishers. The book is <i>R for Business Analytics</i>, authored by A Ohri. Mr.
Ohri provides us with a brief background of his own journey as a business analytics consultant, and shares how R helped complement his work with a very low cost (time to learn the software) and very large benefits. At the outset, he emphasizes that the book is not geared towards statisticians, but more towards practicing business analytics professionals, MBA students, and pragmatically oriented R neophytes and professionals alike. In addition, there is a focus on using GUI oriented tools towards assisting users in quickly getting up to speed and applying business analysis tools (Rattle, for example, is covered as an alternative to Weka, which has been covered here previously). In addition, he provides numerous interviews with well known company representatives who have either successfully integrated R into their own development flow (including JMP/SAS, Google, and Oracle ), or found that large groups of customers have utilized R to augment their existing suite of tools. The good news is that many of the large companies do not view R as a threat, but as a beneficial tool to assist their own software capabilities.<br /><br /> After assisting and helping R users navigate through the dense forest of various GUI interface choices (in order to get R up and running), Mr. Ohri continues to handhold users through step by step approaches (with detailed screen captures) to run R from various simple to more advanced platforms (e.g. CLOUD, EC2) in order to gather, explore, and process data, with detailed illustrations on how to use R's powerful graphing capabilities on the back-end.<br /><br /> The book has something for both beginning R users (who may be experienced in data science, but want to start learning how to apply R towards their field), and experienced R users alike (many, like myself, may find it useful to have a very broad coverage of the myriad number of packages and applications available, complemented by quickly accessible tutorial based illustrations). 
In summary, the book has extremely broad coverage of R's many packages that can be used for business data analysis, with a very hands-on approach that can help many new users quickly get up and running with R's powerful capabilities. The only potential down-side is that covering so many topics comes at the cost of sacrificing some depth and mathematical rigor (leaving the door open for readers to pursue several more specialized R texts).</div>

The Kaggle Bug (2012-08-22)
If you have any interest in data mining and machine learning, you might have already caught the Kaggle bug.<br /><br />I myself fairly recently got caught up in following the various contests and forums after reading a copy of "Practical Time Series Forecasting," 2nd edition, by<br />Galit Shmueli. What makes the contests great is that they allow any ambitious and creative data scientist or amateur enthusiast to participate in and learn a wealth of new knowledge and tricks from more experienced professionals in the field.<br /><br />What should make it even more interesting to readers here is that many of the winners of these high-purse contests are often from the financial world. Take one of my personally inspirational traders, Jaffray Woodriff, manager of the well-known machine learning oriented hedge fund, Quantitative Investment Management (better known by its acronym, QIM). I had mentioned recently to a surprised friend that Mr. Woodriff had also participated in the more well-known Netflix prediction contest (having been a member of the third-place team at one point).
<br /><br />In particular, the most recent contest that has many eager followers watching is the $3,000,000 Heritage Health Prize Competition (sponsored by Heritage Provider Network), which is an open contest to predict the likelihood of patient hospital admission. What particularly inspired this blog post is a very useful blog from one of the leading contestants, Phil Brierley (a.k.a. Sali Mali), who has interestingly joined with the marketmaker team, also affiliated with a prediction-related fund. Mr. Brierley has shared tremendously useful insights about practical methods of attacking the problem-- all the way from SQL preprocessing and cleaning to intuitive visualization methodologies. I applaud him for his generous sharing of insights with the rest of the predictive analytics community. Although he hasn't posted in a while, his journal of thoughts is still highly useful.<br /><br />Anyone looking for a grubstake could certainly use three million to get started =)<br /><br />Below are the specific links mentioned...<br /><br /><a href="http://anotherdataminingblog.blogspot.com/">http://anotherdataminingblog.blogspot.com/</a><br /><a href="http://www.heritagehealthprize.com/c/hhp">http://www.heritagehealthprize.com/c/hhp</a><br /><a href="http://www.kaggle.com/">http://www.kaggle.com/</a><br /><br />...and a newer one from Stack Exchange:<br /> <a href="http://blog.stackoverflow.com/2012/08/stack-exchange-machine-learning-contest/?cb=1">http://blog.stackoverflow.com/2012/08/stack-exchange-machine-learning-contest/?cb=1</a><br /><br /><br />

The Facebook Doomsday Watch (2012-05-30)
I've been following the circus of Facebook commentators and bystanders pointing to its horrific failed IPO launch and seemingly inevitable crash to zero.
While my focus here isn't really so much on fundamentals or basic TA, I do want to comment on some subjective thoughts on the matter, as well as illustrate one catchy graphic I put together.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-DwbZQp5le4M/T8cGH14BwoI/AAAAAAAAAVc/uSAxlsPxA9M/s1600/ebfbipo.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-DwbZQp5le4M/T8cGH14BwoI/AAAAAAAAAVc/uSAxlsPxA9M/s400/ebfbipo.jpg" width="275" /> </a></div><div class="separator" style="clear: both; text-align: center;">Fig 1. FB IPO drawdown (with potential trajectory) vs. EBAY historical IPO opening (a.k.a. the U-TURN pattern).</div><div class="separator" style="clear: both; text-align: center;"><br /></div>Having lived through and experienced the many ballyhooed IPO juggernauts of the past, I can't help but think back to how overvalued stocks like Ebay and Google 'felt' to me at the outset. We all know that we can't directly compare such small samples in any statistical manner with much conviction, but that qualitative sense in me feels that Facebook is one of those Wall Street darlings we rarely encounter and wish we could go back and buy at a discount. Sure, there were the megaflops (Blackstone, The Globe, etc.) that never quite revived, but then again, consider the Lynch method of buying (...are the masses using it?), the massive institutional support available, the number of shorts that are sure to pile on, and more importantly, the nagging fact that it is consistently one of the highest-viewed websites of all (typically above or next to Google and Baidu -- don't believe me, check Alexa). Ok, but enough of the soapbox on those biased musings-- one quantitative comparison to consider in the chart above is how Ebay fared at the outset, when it too was lambasted as a failure throughout Internet chat-rooms and by various media pundits.
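For concreteness, the drawdown-versus-open calculation behind the chart can be sketched in a few lines of Python (the 42.05 opening reference is the value implied by the drawdown table in this post -- e.g. 28.84 maps to -31.41% -- and is not taken from quote data):

```python
def drawdown_vs_open(price, open_price=42.05):
    """Percent decline from the first-day opening price.

    42.05 is the opening reference implied by the post's drawdown table,
    not a figure pulled from quote data.
    """
    return price / open_price - 1.0

# reproduce the table's drawdown trajectory levels
for p in (28.84, 26.5, 24.0, 23.0):
    print(f"{p:6.2f}  {drawdown_vs_open(p):+.2%}")
```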
What I have graphed is a drawdown for both (with adjusted Ebay quotes) relative to the 1st-day open bid. The last points on Facebook are potential drawdown (relative to IPO opening price) trajectories at<br /><br /><table border="0" cellspacing="0"> <tbody><tr> <td align="RIGHT">28.84</td> <td align="RIGHT">-31.41% (yesterday's close)</td> </tr><tr> <td align="RIGHT">26.5</td> <td align="RIGHT">-36.98%</td> </tr><tr> <td align="RIGHT">24</td> <td align="RIGHT">-42.93%</td> </tr><tr> <td align="RIGHT">23</td> <td align="RIGHT">-45.30%</td> </tr></tbody></table><br />So, I leave you with that as food for thought. I don't often discuss my thoughts about semi-qualitative opportunities, but then again, we don't get these types of juggernaut long-term opportunities all that often.* As always, please make your own informed decisions, and I'll try to get back on topic... one of these days.<br /><br />* Two other counterpoints (amongst many excluded) that I'm sure some more astute observers will note:<br />1) The Ebay IPO U-Turn occurred during the mega bull run of the dot-com mania. <br />2) If the Greece fiasco (or insert any suitable catalyst here) escalates into a fat-tail flight-to-safety avalanche (which, as I have pointed out, have been exceedingly abundant of late), then keep in mind that Facebook and any other equity leaders should be expected to plunge together; hence, the emphasis on the LONG-term portfolio component opportunity.<br /><br /><br />

Expanding Visualization of published system edges (R) (2012-02-28)
I happened to be looking over a revised text by a systems author I follow. I will be a bit vague about specifics, as the system itself is based on well known ideas, but I'll leave the reader to research related systems.
The basic message illustrated in this post is that I often make an effort to look at different viewpoints of system-related features that are not always explored in the texts.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-w3r8qjg_PLY/T0212ZuOQ4I/AAAAAAAAAVA/-RWMpwaMsCo/s1600/eq1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/-w3r8qjg_PLY/T0212ZuOQ4I/AAAAAAAAAVA/-RWMpwaMsCo/s400/eq1.jpg" width="400" /></a></div>Fig 1. Bar graph showing a 0.48% average weekly gain under the conditional system parameters, vs. arbitrary trades averaging 0.2% per week over the same 14 yr. period.<br /><br />For example, the following system is based upon buying at pullbacks of a certain equity series and holding for a week. In the book, a bar graph is shown illustrating a useful edge of about 0.48%/trade vs. simply buying and holding for 0.2%/trade. Although the edge is useful and demonstrated well in the bar graph illustration, it can be useful to look at the system's performance from various other perspectives. <br /><br />As an example, we might wonder how the system unfolded over time. In order to look at this, we can plot a time series representation of the system's equity curve (assuming 100% compounding, no slippage, and no fractional sizing).
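To see why the per-trade gap matters, compound the two weekly averages over the 14-year window (this assumes, purely for illustration, one trade every week):

```python
weeks = 52 * 14                     # roughly 14 years of weekly holds
system_growth = 1.0048 ** weeks     # 0.48% average gain per week
benchmark_growth = 1.002 ** weeks   # 0.2% average gain per week
# the conditional edge compounds to roughly 33x vs. roughly 4.3x
```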
The curve is shown compared to a straw-broom plot of 100 monte carlo simulation paths of the true underlying data, comprised of randomly and uniformly selected data over the same period.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-QVVgBKUe4gE/T0214FjfQjI/AAAAAAAAAVI/wC0_QXUeybg/s1600/eq2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="315" src="http://3.bp.blogspot.com/-QVVgBKUe4gE/T0214FjfQjI/AAAAAAAAAVI/wC0_QXUeybg/s400/eq2.jpg" width="400" /> </a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;">Figure 2. Plot of edge based equity path vs. simulated Monte Carlo Straw-Broom Plots of randomly selected series based upon true underlying data.</div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">Looking at the 'unwinding' of the actual system's 14yr. time series path, we can make a few observations.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">1) The edge, in terms of terminal wealth only, far outperforms several randomly simulated data paths built from the actual instrument.</div><div class="separator" style="clear: both; text-align: left;">2) Unfortunately, the edge also has very wild swings and variation (resulting in a very large drawdown). </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Had we blindly selected the system itself based upon the bar graph alone, it's very possible, that we could have entered at the worst possible time (just prior to the drawdown). 
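The straw-broom construction can be approximated with an IID bootstrap of the instrument's returns (a sketch; the post does not publish its simulation code):

```python
import numpy as np

def straw_broom(returns, n_paths=100, seed=0):
    """Equity curves from IID resampling (with replacement) of observed returns.

    Note: resampling independently destroys any autocorrelation in the
    original series, which is precisely the caveat raised in the post.
    """
    rng = np.random.default_rng(seed)
    returns = np.asarray(returns, dtype=float)
    draws = rng.choice(returns, size=(n_paths, returns.size), replace=True)
    return np.cumprod(1.0 + draws, axis=1)   # one equity path per row
```

Plotting each row gives the straw-broom; a block bootstrap (resampling contiguous chunks) is one standard way to retain some of the autocorrelation structure.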
It also illustrates an issue I personally have with using simple Monte Carlo analysis (with IID assumptions) as a proxy for the underlying data. Namely, the auto-correlation properties have been filtered out, making the system-based edge appear much better in comparison. I have spent a lot of time thinking about ways to deal with this issue, but that's a discussion for another day. Still, it's not necessarily a bad result at all. Rather, it gives us some features (persistence and superior edge/trade) that we can use as a springboard for further optimization; for example, we might think about adding a conditional filter to mitigate the large drawdown based upon underlying features that may have coincided during that period.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Data was plotted using R ggplot2. Although I think the plotting tool is excellent, I find the processing time a bit long.</div><br /><br /><br />Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com4tag:blogger.com,1999:blog-107568321062020427.post-89167117742259300562012-01-31T23:21:00.000-08:002012-01-31T23:53:50.056-08:00MINE: Maximal Information-based NonParametric Exploration<br />There was a lot of buzz in the blogosphere as well as the science community about a new family of algorithms that are able to find non-linear relationships over extremely large fields of data. What makes it particularly useful is that the measure(s) it uses are based upon mutual information rather than standard Pearson correlation-type measures, which do not capture non-linear relationships well. 
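The point about Pearson correlation missing nonlinear structure is easy to demonstrate. The sketch below (pure Python, a crude histogram-based mutual information estimate of my own, not the MINE implementation) compares the two measures on y = x²:

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mutual_info(x, y, bins=10):
    """Plug-in mutual information estimate (nats) from a 2-D histogram.
    Not the MIC statistic -- just enough to show MI sees what r misses."""
    def bin_of(v, lo, hi):
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    n = len(x)
    lox, hix, loy, hiy = min(x), max(x), min(y), max(y)
    joint, px, py = {}, {}, {}
    for a, b in zip(x, y):
        i, j = bin_of(a, lox, hix), bin_of(b, loy, hiy)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] = px.get(i, 0) + 1
        py[j] = py.get(j, 0) + 1
    return sum(c / n * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in joint.items())

rng = random.Random(0)
x = [rng.uniform(-1, 1) for _ in range(2000)]
y = [v * v for v in x]  # purely nonlinear relationship
```

Here `pearson(x, y)` is near zero (the linear view sees nothing), while the mutual information estimate is clearly positive.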
<br /><br />The (Java-based) software can be downloaded here: http://www.exploredata.net/. In addition, the software can be run directly from R.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-_twTNhFO-Ko/TyjqVlh-aFI/AAAAAAAAAU0/vWh1JKRoOMs/s1600/shot3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-_twTNhFO-Ko/TyjqVlh-aFI/AAAAAAAAAU0/vWh1JKRoOMs/s400/shot3.png" width="396" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div><div style="text-align: center;"> Fig 1. Typical non-linear relationship exemplified by intermarket relationships.</div><br />The algorithm seems promising, as it would allow us to mine very large data sets (such as financial intermarket relationships) and find potentially meaningful non-linear relationships. If we were to use the typical Pearson correlation measures, such relationships would show very small R^2 values and thus be discarded as non-significant.<br /><br />I decided to take it for a spin on a non-linear example, taken from M. Katsanos' book on intermarket trading strategies (p. 25, fig 2.3). In figure 1, we can clearly see that the relationship between markets is non-linear, and thus the traditional linear fit returns a low R^2 value of .143 (red line); a loess fit is also shown in blue. After running the same data through MINE, the results returned in a .csv file were... <br /><br /><br />MIC (strength)&nbsp;&nbsp;MIC-p^2 (nonlinearity)&nbsp;&nbsp;(remaining columns unlabeled)<br />0.16691002&nbsp;&nbsp;0.62445&nbsp;&nbsp;7.129283&nbsp;&nbsp;-0.3777441<br /><br /><br />The MIC (Maximal Information Coefficient) of .167 was not much greater than the R^2 measure of .143 above. However, one point mentioned in the paper was that as the signal becomes more obscured by noise, the MIC will degrade comparably. 
<br /><br />The next step would be to find some type of fit to minimize the noise component and make updated comparisons.<br /><br />To better illustrate how useful it might be, I am attaching a screenshot of the reference material here.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-dV7DmXqXn-U/Tyjm-MUe5cI/AAAAAAAAAUs/YL2pCDV5kCg/s1600/paper_Ex.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-dV7DmXqXn-U/Tyjm-MUe5cI/AAAAAAAAAUs/YL2pCDV5kCg/s400/paper_Ex.png" width="342" /></a></div><br /><br />Figure 2. Reproduced from Fig 6. 'www.sciencemag.org/cgi/content/full/334/6062/1518/DC1'<br /><br />Notice the MIC score measure outperforms other traditional methods on many non-linear structural relationships.<br /><br />Here is the full R code to repeat the basic experiment.<br />###############################################<br /># MINE example from intelligenttradingtech.blogspot.com 1/31/2012<br /><br />library(quantmod)<br />library(ggplot2)<br /><br />getSymbols('^GSPC',src='yahoo',from='1992-01-07',to='2007-12-31')<br />getSymbols('^N225',src='yahoo',from='1992-01-07',to='2007-12-31')<br /><br />sym_frame<-merge(GSPC[,6],N225[,6],all=FALSE)<br />names(sym_frame)<-c('GSPC','N225')<br /><br />p<-qplot(N225, GSPC, data=data.frame(coredata(sym_frame)),<br />geom=c('point'), xlab='NIKKEI',ylab='S&P_500',main='S&P500 vs NIKKEI 1992-2007')<br /><br />fit<-lm(GSPC~ N225, data=data.frame(coredata(sym_frame)))<br />summary(fit)<br />fitParam<-coef(fit)<br /><br />p+geom_abline(intercept=fitParam[1], slope=fitParam[2],colour='red',size=2)+geom_smooth(method='loess',size=2,colour='blue')<br /><br />### MINE results<br />library("rJava")<br />setwd('/home/self/Desktop/MINE/')<br /><br />write.csv(data.frame(coredata(sym_frame)),file="GSPC_N225.csv",row.names=FALSE)<br />source("MINE.r")<br 
/>MINE("GSPC_N225.csv","all.pairs")<br /><br />##########################################################<br /><br />The referenced paper is "Detecting Novel Associations in Large Data Sets,"<br />David N. Reshef, et al.,<br />Science 334, 1518 (2011).<br /><br /><br />As an aside, I've been hooked on a crime drama series called "Numb3rs," playing on Amazon Prime. It's about an FBI agent who gets assistance from his genius brother, a professor of mathematics at a prestigious university. So far, they've discussed Markov chains, Bayesian statistics, data mining, econometrics, heat maps, and a host of other similar concepts applied to forensics.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com11tag:blogger.com,1999:blog-107568321062020427.post-41770526827645569752012-01-01T01:33:00.000-08:002012-01-01T01:40:57.264-08:00Free Online Stanford Machine Learning Course: Andrew Ng. Post Mortem.Happy New Year to all the viewers of this blog and just a short reminder that the course will be available again this January.<br />http://www.ml-class.org/course/auth/welcome<br /><br />Having audited the course, I would highly recommend it to anyone who is interested in a very hands-on learning session covering many of the topics I've posted about (and many other areas, such as how to deal with over/under fitting). Kudos to Dr. 
Ng for a fantastic, engaging, and informative course.<br /><br />As an added incentive, users will become familiarized with many vectorized approaches to programming (via Octave), which are very useful in languages such as Python and R.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com6tag:blogger.com,1999:blog-107568321062020427.post-84362964099538028082011-10-14T18:02:00.000-07:002011-10-14T18:17:43.591-07:00Free auditing of Stanford AI and Machine Learning Courses w/Peter NorvigJust wanted to notify viewers of a few great courses that are being offered free for auditing and/or participation by well-known industry experts, including the co-author of the classic text on AI, 'Artificial Intelligence: A Modern Approach,' Peter Norvig, and Prof. Andrew Ng.<br /><br />http://www.ai-class.com/<br />see also,<br />http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2011/10/14/BUFR1LH9JR.DTL<br /><br />The notice is a bit late, but they are still accepting registrations.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com3tag:blogger.com,1999:blog-107568321062020427.post-53526082680474315202011-10-06T12:12:00.000-07:002011-10-06T22:42:30.673-07:00Spatio-Temporal Data Mining: 2There are many visual methods used to identify patterns in space and time. I've discussed some in prior threads and will show a few others briefly here. One of the most difficult questions I often hear from others regarding Markov-type approaches is how to identify states to be processed.<br /><br />It is similar to the problem one encounters using simple linear factor analysis. Unfortunately, there is no simple answer; moreover, because data streams are becoming so vast, it is almost impossible to enumerate over all possible state sets. Visual mining techniques can be incredibly helpful in narrowing down that space as well as in feature reduction. 
I often use these types of visualizations back and forth with unsupervised classification-type learners to converge on useful state identifications.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/--FsPfrUfg3M/To33ZNFdMKI/AAAAAAAAASk/yl4AjAGtU5I/s1600/spctmp1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="363" src="http://1.bp.blogspot.com/--FsPfrUfg3M/To33ZNFdMKI/AAAAAAAAASk/yl4AjAGtU5I/s640/spctmp1.png" width="640" /></a></div><br /> Fig 1. Spatio-Temporal State plot<br /><br />Figure 1 gives an idea of visualizing states with respect to time, but having such knowledge in isolation isn't of much use. We are more interested in looking for Bayesian-type relationships that give some transition probabilities between linked states in time.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-9J6B_6gCevY/To34DkJU78I/AAAAAAAAASs/FhWTvFCBxjs/s1600/fluc1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-9J6B_6gCevY/To34DkJU78I/AAAAAAAAASs/FhWTvFCBxjs/s400/fluc1.png" width="382" /></a></div><br /> Fig 2. Fluctuation Plot<br /><br />Several methods exist to capture these relationships visually. One common plot used in language processing and information theory is a fluctuation plot. The above plot was built using the same state data as the first graph. It is often used to determine conditional relationships between symbols such as alphabet tokens. The size of each box is directly proportional to the weight of the transition probabilities between row and column states in tabular data. 
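The table behind such a plot is just normalized bigram counts, state(n) versus state(n+1). A minimal Python sketch (the token sequence is made up for illustration):

```python
from collections import Counter

def transition_table(states):
    """Proportion of each state(n) -> state(n+1) transition pair;
    a fluctuation plot encodes these proportions as box sizes."""
    pairs = Counter(zip(states, states[1:]))
    total = sum(pairs.values())
    return {pair: count / total for pair, count in pairs.items()}

probs = transition_table(list("ABAABABBA"))
```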
An example would be the letters 'yzy' being more commonly followed by 'g' (as in syzygy) than by any other state token; thus, one would expect to quickly spot a larger box where the 'yzy' row n-gram meets the 'g' column token.<br /><br />Both plots were produced in R. ggfluctuation() is a plotting command from ggplot2. I am currently investigating how much easier and faster it might be to process such visualizations in tools like Protovis and Processing. I've been especially inspired by reading some of Nathan Yau's excellent visualization work in his book, 'Visualize This.' I included it in the link to the right for interested readers.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com0tag:blogger.com,1999:blog-107568321062020427.post-6757487112758602562011-09-23T17:49:00.000-07:002011-09-26T19:39:27.137-07:00Arc Diagram and spatiotemporal data mining visualizationI won't spend too much time discussing this fascinating topic other than to say it relates very much to prior discussions about pattern discovery via visual data mining (see lexical dispersion plots, for example). I happened across an interesting visualization method called the Arc Diagram, developed by Martin Wattenberg. Working for data visualization groups at IBM and later Google, he developed some interesting visual representations of spatiotemporal data. <br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-xgIl6E9wgv8/Tn0VnOB5B3I/AAAAAAAAASc/t5_BFlzh-O0/s1600/arcd1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="http://1.bp.blogspot.com/-xgIl6E9wgv8/Tn0VnOB5B3I/AAAAAAAAASc/t5_BFlzh-O0/s640/arcd1.jpg" width="378" /></a></div><br />Fig 1. 
Arc Diagram and legend with example of discretized pattern archetype. <br /><br />The resulting plot generates some fascinating temporal signatures, similar to what one might see in phase-space portraits from chaos theory. However, they have been frequently utilized to look for spatiotemporal signatures in music. One might discern a type of underlying order or visual signature of complexity, as well as recurring patterns, in sequential objects ranging from text-based lyrical information to sheet music notes.<br /><br /> Figure 1 shows an example of how one might utilize this tool towards temporal pattern discovery in time series. A weekly series from SPY has been discretized into alphabet tokens, based upon the bin ranges in the included legend. The small chart in the example decodes the archetypal pattern for the sequence ECDCECCD into a time series representation of the 8-week data symbol. The following interactive Java tool from another blogger, Neoformix, was then used to translate the data into an Arc Diagram: http://www.neoformix.com/Projects/DocumentArcDiagrams/index.html . Read from top to bottom, one can look for recurring and related patterns that are repeated over time; certain behavior might warrant further investigation.<br /><br />You can copy the following data stream into the tool to get a feel for the possibilities of visual pattern discovery.* I won't go into too much more detail about utilizing it, other than to say it appears to be a very useful tool in temporal pattern discovery.<br /><br />Please see the following for more ideas on arc diagrams and musical signatures:<br />http://www.research.ibm.com/visual/papers/arc-diagrams.pdf <br /><br />http://turbulence.org/Works/song/mono.html<br /><br />Blog mentioned:<br />http://www.neoformix.com/<br /><br />* Not sure how to attach .xls file here, but if anyone wants a copy of the .xls file, you can send me an email and I'll try to get it out to you. 
Otherwise, you can simply grab a song lyric off the web to play with the tool.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com2tag:blogger.com,1999:blog-107568321062020427.post-33601517115656064982011-08-04T15:44:00.000-07:002011-08-04T15:53:39.936-07:00Aug 4, 2011 "plunge" headlines are in the air tonightToday's financial headlines are littered with the word 'plunge.' Considering today's (cl-cl) drop on the S&P500 was just about -5%, I don't know that I would exactly call that a plunge.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-cAGZ6fj05gw/TjsfOuQf8II/AAAAAAAAASI/Wx34eYk5ZTs/s1600/plungeblog.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="260" src="http://4.bp.blogspot.com/-cAGZ6fj05gw/TjsfOuQf8II/AAAAAAAAASI/Wx34eYk5ZTs/s400/plungeblog.png" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div> Fig 1. 
Historical ts plot of S&P500 returns <= -5%<br /><br />The following R code extracts the historical occurrences where this happened and plots them (sorted by return).<br /><br /><code></code><code class="bash plain">###################################################</code><code class="bash plain"></code><br /><br />library(quantmod)<br /><br />getSymbols("^GSPC",from="1950-01-01",to="2012-01-01")<br />adj<-GSPC$GSPC.Adjusted<br />rtn<-(adj/lag(adj,1)-1)[2:length(adj)]<br />r05<-rtn[rtn<= -.05]<br /><br />plot(sort(r05),type='o',main='S&P500 1950-present returns <= -5%')<br /><code class="bash plain"></code><br /><code class="bash plain">###################################################</code><code class="bash plain"></code><br />Although such occurrences are arguably rare, the 1987 drop is much more worthy of the one-day label 'plunge.'<br /><br />One other disturbing observation in the data, however, is the large temporal clustering of occurrences in the recent 2008 region. Now that's behavior to be concerned about (not to mention revised flash crash data pts.).<br /><br />filtered 1 day cl-cl returns <=-5% sorted by date<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Cvz8Ywx-xV8/Tjsi3pn-5SI/AAAAAAAAASM/S6ylMDk298w/s1600/rtns.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="400" src="http://3.bp.blogspot.com/-Cvz8Ywx-xV8/Tjsi3pn-5SI/AAAAAAAAASM/S6ylMDk298w/s400/rtns.png" width="210" /></a></div>Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com4tag:blogger.com,1999:blog-107568321062020427.post-16048368814545414122011-07-28T14:54:00.000-07:002011-07-28T18:25:09.512-07:00Pattern Recognition: forward Boxplot Trajectories using RAlthough the following discussion can apply to the Quantitative Candlestick Pattern Recognition series, it is addressing the same issue as any basic conditional type system -- 
how and when to exit. The following is one way to visualize and think about it, and is by no means optimal.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-bqEOyXAJwAs/TjILK4Ef1lI/AAAAAAAAASA/tw0eGjADDiI/s1600/traj4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://4.bp.blogspot.com/-bqEOyXAJwAs/TjILK4Ef1lI/AAAAAAAAASA/tw0eGjADDiI/s320/traj4.png" width="319" /></a></div><br /> Fig 1. Posterior Boxplot Trajectory<br /><br />Often we attempt to find some set of prior input patterns that leads to profitable posterior outcomes. However, in most of the available examples, we are typically only given heuristics and rules of thumb on where to exit. This might make sense, since no one can accurately predict where to exit. However, with knowledge of past samples, we can have some idea of where a good exit target might be, given prior knowledge of the forward trajectories. I dubbed it a 'boxplot trajectory' here, as I think it's a useful way to visualize a group of many possible outcome trajectories for further analysis.<br /><br />In this example, a set of daily price-based patterns was analyzed via a proprietary program I wrote in R, which resulted in an input pattern yielding a set of 52 samples that met my conditional criteria. Fig 1 illustrates a way to look at the trajectory outcomes based upon one of the profitable patterns in the conditional criteria. The bottom graph is simply a plot of the median result at each data point in the trajectory. 
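The median series in that bottom panel is straightforward to compute from the sample trajectories; a sketch with made-up forward return paths (not the actual 52 samples):

```python
import statistics

def trajectory_medians(trajectories):
    """Median outcome at each forward step across all sample trajectories."""
    return [statistics.median(step) for step in zip(*trajectories)]

# Three hypothetical 4-step forward paths (percent returns, illustrative)
paths = [[0.2, 0.9, 1.6, 1.1],
         [0.1, 0.5, 1.7, 0.4],
         [-0.3, 0.2, 1.5, 0.9]]
meds = trajectory_medians(paths)
```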
We often try to imagine the best way to exit without foreknowledge of the future (and with somewhat less rule-of-thumb-based criteria).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-gE67oXWDIF8/TjHGL965KmI/AAAAAAAAARk/oICjq4Hq80w/s1600/traj3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="262" src="http://2.bp.blogspot.com/-gE67oXWDIF8/TjHGL965KmI/AAAAAAAAARk/oICjq4Hq80w/s320/traj3.png" width="320" /></a></div> Fig 2. Trajectory tree.<br /><br />One approach would be to use some type of exiting rule based upon the statistical median of each sequential point's range. Knowing that 1/2 of the vertices occur above and 1/2 below the median, we should expect to hit at least 1/2 of the targets at or above the median. Given that the 3rd point is the highest median, it makes sense to exit earlier than waiting for a greater gain further out (which has an even lower median). So if we take as a target the median value of the 3rd pt., we achieve an average and fixed target of 1.59% on 27/52 of the total samples.<br /><br />Of the remaining samples, we may now wish to exit on the 11th bar (or earlier if that target is hit sooner) at a target of .556%, which is achieved on 13/52 of the remaining samples. This leaves only the last bar, for which we simply use the average return as the weighted return value for that target, in this case -1.74% for the remaining samples: 12/52. Notice we will always have the worst contenders that were put off until the end.<br /><br />The expectation yields E(rtn)=27/52*.0159+13/52*.0056+12/52*(-.017)=.0057<br />eking out a small average + gain of .57%. Compounded, this gives:<br />(1+.0159)^27*(1+.0056)^13*(1-.017)^12~ 34% rtn for 52 trades, each less than 3 days in length. Hit rate (as secondary observation) is 77% in this case.<br /><br />The approach is particularly appealing for a high frequency strategy with very low commissions. 
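The expectation and compounding arithmetic above is easy to verify in a couple of lines (note the final term is the losing tier, so it enters with a negative sign):

```python
# Per-trade expectation of the three-tier exit scheme described above
e_rtn = 27/52 * 0.0159 + 13/52 * 0.0056 + 12/52 * (-0.017)

# Compounded over all 52 trades
compounded = (1 + 0.0159)**27 * (1 + 0.0056)**13 * (1 - 0.017)**12
```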
Notice it's by no means comprehensive (and yes, I've only shown in-sample results here), but rather a novel way to think about exiting strategies.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com2tag:blogger.com,1999:blog-107568321062020427.post-78428444153379778452011-05-17T15:34:00.000-07:002011-05-23T13:05:59.212-07:00Simulating Win/Loss streaks with R rle functionThe following script allows you to simulate sample runs of Win, Loss, and Breakeven streaks based on a random distribution, using the run length encoding function, rle, in R. Associated probabilities are entered as a numeric vector argument to the sample function.<br /><br />You can view the actual sequence of trials (and consequent streaks) by looking at the trades result. maxrun returns a vector of the maximum Win, Loss, and Breakeven streak lengths for the sample run. Lastly, prop.table gives the proportions of transitions from a losing streak of length n to streaks of each other length.<br /><br />Example output (max run length of losses was 8 here):<br /><br />100*prop.table(tt)<br /> lt.2<br />lt.1 1 2 3 4 5 6 7 8<br /> 1 41.758 14.298 5.334 1.662 0.875 0.131 0.000 0.044<br /> 2 14.692 4.897 1.924 0.787 0.394 0.087 0.131 0.000<br /> 3 4.985 2.405 0.525 0.350 0.000 0.000 0.044 0.000<br /> 4 1.662 0.875 0.306 0.087 0.000 0.000 0.000 0.000<br /> 5 0.831 0.219 0.175 0.000 0.000 0.044 0.000 0.000<br /> 6 0.087 0.131 0.044 0.000 0.000 0.000 0.000 0.000<br /> 7 0.087 0.087 0.000 0.000 0.000 0.000 0.000 0.000<br /> 8 0.044 0.000 0.000 0.000 0.000 0.000 0.000 0.000<br /><br />maxrun<br /> B L W <br /> 3 8 17 <br /><br />-----------------------------------------------------------------------------------------<br />#generate simulations of win/loss streaks using the rle function<br /><br />trades<-sample(c("W","L","B"),10000,prob=c(.6,.35,.05),replace=TRUE)<br />traderuns<-rle(trades)<br />tr.val<-traderuns$values<br />tr.len<-traderuns$lengths<br 
/>maxrun<-tapply(tr.len,tr.val,max)<br /><br />#streaks of losing trades<br />lt<-tr.len[which(tr.val=='L')]<br />lt.1<-lt[1:(length(lt)-1)]<br />lt.2<-lt[2:(length(lt))]<br /><br />#simple table of losing trade run streak(n) frequencies<br />table(lt)<br /><br />#generate joint ensemble table streak(n) vs streak(n+1)<br />tt<-table(lt.1,lt.2)<br />#convert to proportions<br />options(digits=2)<br />100*prop.table(tt)<br />maxrunIntelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com2tag:blogger.com,1999:blog-107568321062020427.post-88255194214844236782011-05-10T19:42:00.000-07:002011-05-13T12:37:11.626-07:00High Low Clustering on intraday high frequency sampled dataNothing unusually exciting in this post, but I happened to be engaged in some particle-based methods recently and made some simple visual observations as I was setting up some of the sampling environment in R. I am also using RKWard on Ubuntu, so I'm gathering everything from the current environment (including graphics).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-1biitg2dFjE/Tc2HrRtQghI/AAAAAAAAARQ/MQPIgYOYpHA/s1600/hlstudy1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="311" src="http://2.bp.blogspot.com/-1biitg2dFjE/Tc2HrRtQghI/AAAAAAAAARQ/MQPIgYOYpHA/s320/hlstudy1.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-1sS7bk3KWZI/Tcn1AXpVFOI/AAAAAAAAARI/RE5fBGuQ8wo/s1600/aaplmaxmin.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><br /></a></div><br />Fig 1. Parallel plot of half hr sample of High and Low intraday data points vs time (Max is purple dots, Min are red). Fig 2. 
Cumulative count of high low events per interval (blue = total high and low).<br /><br />The plot illustrates sampled intraday data at half hour increments.<br />The highs and lows of each sample interval are overlaid using purple to denote an intraday high and red to denote an intraday low. <br />Interesting points of observation are--<br /><br />1) The high and low samples tend to be clustered at open, midday, and close.<br />2) High and low events do not appear to be uniformly and randomly distributed over time. <br />This kind of data processing is useful for generating, exploring, and evaluating pattern-based setups. <br /><br />The study is by no means complete or conclusive; I'm just stopping by to show more of the kind of data processing and visualization that R is capable of. If anyone has done any more conclusive studies, I'd be interested to hear about them.<br /><br />P.S. If anyone notices any odd changes, for some reason Google was having some issues the last few days, and it appears to have reverted to my original (not ready to launch) draft.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com4tag:blogger.com,1999:blog-107568321062020427.post-37664212208934561842011-03-08T15:12:00.000-08:002011-03-10T01:47:58.253-08:00Can one beat a Random Walk-- IMPOSSIBLE (you say?)Firstly, apologies for the long absence as I've been busy with a few things. Secondly, apologies for the horrific use of caps in the title (for the grammar monitors). Certainly, you'll gain something useful from today's musing, as it's a pretty profound insight for most (was for me at the time). 
I've also considered carefully whether or not to divulge this concept, but considering it's often overlooked and in the public literature (I'll even share a source), I decided to discuss it.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://lh6.googleusercontent.com/-Mkk0T0mo9RQ/TXidwtzvYZI/AAAAAAAAARE/zSZLTfNIffY/s1600/rw.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="310" src="https://lh6.googleusercontent.com/-Mkk0T0mo9RQ/TXidwtzvYZI/AAAAAAAAARE/zSZLTfNIffY/s320/rw.jpg" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://lh3.googleusercontent.com/-MS01EvZqBkE/TXa0KPeMZCI/AAAAAAAAARA/yTocKd21dsY/s1600/rwit.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><br /></a></div><br />Fig 1. Random Walk and the 75% rule<br /><br />I've seen the same debate launched over and over on various chat boards, which concerns the impossibility of theoretically beating a random walk. In this case, I am giving you the code to determine the answer yourself.<br />The requirements: 1) the generated data must be from an IID Gaussian distribution; 2) the series must be coaxed to a stationary form.<br /><br />The following script will generate a random series of data and follow the so-called 75% rule, which says:<br />Pr[Price(n) > Price(n-1) | Price(n-1) < Price_median] = Pr[Price(n) < Price(n-1) | Price(n-1) > Price_median] = 75%. This very insightful rule (explained both mathematically and in layman's terms in the book 'Statistical Arbitrage,' linked in the Amazon box to the right) shows that for a stationary, IID random sequence with an underlying Gaussian distribution, the above prediction scheme converges to a correct prediction rate of 75%!<br /><br />Now, we all know that market data is not Gaussian (nor is it commission/slippage/friction-free), and therein lies the rub. 
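The convergence is also easy to check empirically; here is a pure-Python sketch of the same rule (my own translation, not the book's code):

```python
import random

def hit_rate(n=100_000, seed=1):
    """Empirical 75% rule on an IID Gaussian series: if x[t-1] is below
    the series median, predict x[t] > x[t-1]; if above, predict a fall."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]
    m = sorted(x)[n // 2]  # sample median
    hits = trials = 0
    for prev, cur in zip(x, x[1:]):
        if prev == m:
            continue  # no prediction exactly at the median
        trials += 1
        hits += (cur > prev) == (prev < m)
    return hits / trials

rate = hit_rate()  # converges toward 0.75 as n grows
```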
But hopefully, it gives you some food for thought as well as a bit of knowledge to retort with when you hear the debates about the impossibility of beating a random walk. <br /><br />R Code is below. <br /><br />##################################################<br />#gen rnd seq for 75% RULE<br /><br />#generate a stationary IID gaussian series (the 'coaxed' random walk)<br />rw<-rnorm(100)<br /><br />m<-median(rw)<br />trade<-rep(0,length(rw))<br /><br />for(i in 1:(length(rw)-1)){<br />if(rw[i] < m) trade[i]<- (rw[i+1]-rw[i])<br />if(rw[i] > m) trade[i]<- (rw[i]-rw[i+1])<br />if(rw[i] == m) trade[i]<- 0}<br /><br />eq_curve<-cumsum(trade)<br /><br />par(mfrow=c(2,1))<br />plot(rw,type='l',main='random walk')<br />plot(eq_curve,type='l',main='eq_curve')Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com44tag:blogger.com,1999:blog-107568321062020427.post-56000034626546995692010-11-19T18:55:00.000-08:002010-11-19T18:55:48.507-08:00Finally! A practical R book on Data Mining: "Data Mining With R, Learning with Case Studies," by Luis TorgoI've been a bit busy lately with a few big things; however, I wanted to stop by and mention a fantastic book for those who have been following along with the R examples. Anyone who's followed my blog knows that I'm big on practical books with examples. There are also three main open source tools I've discussed with regards to prototyping trading systems: Weka, Python, and R. Of the three tools mentioned, I've been able to recommend Witten and Frank's book on Data Mining for Weka, and Stephen Marsland's book on Machine Learning as the Python bible for hands-on Machine Learning. Well now, I can thankfully complete the trinity with Luis Torgo's new book, 'Data Mining with R, Learning with Case Studies.'<br /><br />Both R novices and experts will find this a great reference for Data Mining. 
The opening chapter has a useful intro to get you started on R (Factors, Vectors, and Data Frames, as well as other useful objects, are covered with examples). Additional chapters cover both classification- and regression-type prediction schemes.<br /><br /> The most useful chapter to readers here, however, is the chapter on 'Predicting Stock Market Returns.' Many of the readers who have been looking for example scripts on some of the topics I've covered will find them here. Not only is gathering and processing data (CSV, quantmod and Yahoo Finance, and MySQL) well covered, but various prediction and evaluation schemes (cross validation, sliding and growing windows, PerformanceAnalytics package) are discussed along with access to the author's code. Many topics I haven't discussed yet are available here as well, including MARS (Multivariate Adaptive Regression Splines), SVMs, and various validation techniques, along with handy tabulation of results. Having read a previous draft, I'm still working through the examples, and welcome any feedback and thoughts I can address.<br /><br />The book can be accessed via the Amazon book showcase on the right, and instructions for R code access are available in the book.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com8tag:blogger.com,1999:blog-107568321062020427.post-247665596260792052010-08-10T13:22:00.000-07:002010-08-10T14:17:16.551-07:00Conditioning Systems on Regime VariablesHere is a brief and simple example of switching systems based upon regime type (sometimes called gating). <br /><br />I've brought up the idea of conditioning systems based upon regimes many times in past posts. Some texts call this filtering, although I prefer to use the term conditional gating. The simple idea is to turn on a certain system during certain conditions and, during alternate conditions, either switch systems or simply track the underlying series. 
In this case the gating condition is regime, which here is high or low volatility as measured by the VIX. Although I'm not divulging the details of the underlying system itself, I've seen enough discussions in the public domain to feel that other traders have picked up on the ideas demonstrated here.<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/_7YSZm5NIAmQ/TFsTNAupaQI/AAAAAAAAAQk/Q8zzwzFsJaw/s1600/term_wealth.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="267" src="http://3.bp.blogspot.com/_7YSZm5NIAmQ/TFsTNAupaQI/AAAAAAAAAQk/Q8zzwzFsJaw/s400/term_wealth.jpg" width="400" /></a></div><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /> Fig 1. Terminal Wealth vs. VIX threshold<br /><br />The animation below shows the system results at each step of the conditioning variable, the VIX. Notice the dramatic improvement at the value of 23. Also notice, as mentioned in earlier posts, that the optimal switching point of 23 is the most robust value: even if the OOS results land to the left or right of the optimal switching point, they will still be the best local values over a wide range of the conditioning variable. The astute observer might have noticed that this system simply tracks buy & hold during low VIX regimes, while switching on system V during the high regimes.
It is evident that the terminal wealth simply tracks buy & hold beyond a certain VIX threshold, since past that point the system is always locked into tracking mode.<br /><br />The results are only shown in-sample; however, I've found the approach to be pretty successful OOS as well.<br /><br /><object height="344" width="425"><param name="movie" value="http://www.youtube.com/v/ZcSyV0mMcA0&hl=en&fs=1"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/ZcSyV0mMcA0&hl=en&fs=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br /><br />Video 1. Stepping the Equity Curve system through a linear VIX range.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com10tag:blogger.com,1999:blog-107568321062020427.post-16391890574499342632010-08-02T23:25:00.000-07:002010-08-03T11:35:03.335-07:00Quantitative Candlestick Pattern Recognition (Part 2 -- What's this Natural Language Processing stuff?)<b></b><br /><br /><br />I wanted to briefly add one more thought regarding the temporal nature of probabilities, as was alluded to in my correspondence with Adam, as well as in the prior closing comments on the Chaos post (structure coalescing and dispersing).<br /><br />I will borrow from the field of Natural Language Processing and introduce one common visual description of how the states evolve over time, using something called a Lexical Dispersion Plot.<br /><br /><a href="http://3.bp.blogspot.com/_7YSZm5NIAmQ/TFc5yBldJLI/AAAAAAAAAQc/wFl9hwC80VY/s1600/dsip_plot.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="176" src="http://3.bp.blogspot.com/_7YSZm5NIAmQ/TFc5yBldJLI/AAAAAAAAAQc/wFl9hwC80VY/s640/dsip_plot.jpg" width="640" /></a><br /><br /><br /><br /><br /><br /><br />Fig 1.
Lexical (cluster state vocabulary) Dispersion Plot of Clustered Candlestick States over time<br /><br />In studies of language, we are often interested in observing how statistical patterns and relationships of sounds, characters, and words evolve over time. Natural Language Processing is an entire field dedicated to finding proper tools and vernacular to describe such statistics. The idea of using a lexical dispersion plot is to observe how the lexicon itself evolves over time. To give a simple example, we might take a corpus of common pop culture texts borrowed from some library, and look at the occurrence of the following three word states: "spider", "man", and "spider man". The first two terms are isolated words, and the third term is called a bigram, which is a joint occurrence of two states in sequential order. <br /><br />Now, although I haven't created the proposed lexical dispersion plot for the above scenario, one could reasonably expect the number of occurrences of the single words, spider and man, to be relatively frequent and uniform from about 1900 to, say, 1960, while the joint pair (spider,man) might occur relatively sparsely. However, beyond the 60s, we would notice an increase in the joint pair (spider,man) as the fictional character's popularity began to grow in the collective pop consciousness. We might also expect the bigram to occur with large frequency given the recent popularity of the films. However, it's possible that a few hundred years later, the joint term and character popularity might just wane and eventually die off, even though the two unigram terms (spider and man) are still frequently observed.<br /><br />Ok, so what's the point to this?
Well, we are commonly taught in statistics that there is a population that exists to describe the ultimate best statistical model of any observational set that lies somewhat beyond the notion of time (much like Plato's ideas of forms existing behind the scenes to describe all nature over all time, for philosophy fans).<br /><br />But one of the things that disturbed me earlier on is exactly what I described in the prior paragraph on the joint bigram of spider man, which is that sometimes we have to pragmatically shed some of our beliefs about 'ideal' populations and just try to observe statistical phenomena as they occur temporally. As mentioned in the Chaos quote, some patterns just spontaneously occur (spider man) for a while, then disappear over time. So the notion of a larger population existing behind the scenes (and all the statistical rigor associated with it) might be either overkill or even misleading towards our goal of trying to capture the essence of fleeting patterns. From a statistical viewpoint, I suppose I would lean more on the side of the Bayesian inference camp (constantly updating beliefs online, rather than the frequentist approach).<br /><br />It's common knowledge in markets that financial time series are not IID (independent and identically distributed) over time. Rather, we accept that there are clustered regions of behavior that tend to occur frequently together, and likewise, disappear over time (often reappearing again, though not always). This body of knowledge, specifically related to volatility, is sensibly labeled as heteroscedasticity (differing variance) as opposed to homoscedasticity (constant variance) of observations. We might also notice such behavior being binned and quantified into certain 'regimes' of local stability.<br /><br />Now, if any of the above meandering made any sense, I will describe how it relates to the Quantitative Candlestick Pattern Recognition article.
Recall that using clustering, we were attempting to identify a vocabulary of states, over a limited set of features (in the example, six states were identified), that best partitions related candlestick symbols by state in an unsupervised manner. However, the dispersion plot in Fig 1 shows that, viewed from the perspective of a central population, these states are not uniformly distributed (IID) over time; rather, some tend to occur frequently over relatively long periods of time, while others appear and disappear for reasonable windows of time. States one and two in the set tend to occur rather frequently, because they are very small moves (dojis and such), which tend to occur often over time. However, some of the larger moves captured in states 3 and 4 tend to persist for some periods, then disappear over other intervals. The likely explanation is that larger moves tend to be associated with volatility, which, as we know, exhibits heteroscedasticity (clustering together in time). Keep in mind, the dispersion example is not only limited to single symbols over time, but can be extended to any number of n-gram pairs or symbols (such as the two word bigram state for spider man).<br /><br />With that knowledge in mind, it doesn't always make a whole lot of sense to try to develop and require a central fixed body of pattern statistics and related models over long periods of time, or even require many related statistical tests as necessary (things like n-fold cross validation over very large time series, bootstrap re-sampling methods with shuffling, and requiring decades of backtesting training data to obtain confidence that we found the best pattern vocabulary to describe data for all time).
For instance, in one of the better books on statistics for traders, "Evidence-based TA," by Aronson, many of the tests were conducted using t-tests of entire bodies of financial series and rules over long periods of time, rejecting many potential pockets of temporal success because they were lumped in and bootstrapped with much longer periods of data in order to draw conclusions about the statistical significance of better-than-chance success. <br /><br />This is not to say that common trading statistics should be thrown out; not at all. Instead, the hope is to look at how the information being evaluated is processed over time (for instance, we may look at long term statistics of trade results, but focus more on short term statistics and modelling of the underlying patterns they depend upon).<br /><br />Additionally, we might be interested in breaking up the pattern information stream into smaller segments and observing and adapting to how the segments of data streams evolve and change over time. The key benefit to us is that these patterns in the data streams do tend to persist together for quite some time (often reasonably long) before dispersing and moving on to new forms of patterns. There are several different machine learning concepts on the horizon that work with evolving data streams over time and space (for example, by adding and pruning pattern model parameters). I have been spending some time evaluating one of them recently (although I'm not saying which at the moment), and it looks promising.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com12tag:blogger.com,1999:blog-107568321062020427.post-6629090351325775072010-07-06T09:59:00.000-07:002010-07-10T19:20:53.186-07:00Chaos in the Financial Markets?Over the years I've had quite a few interested individuals ask me about Chaos and its applications towards trading.
Well, much as hidden Markov models and speech processing were made popular by James Simons and his team at Renaissance Technologies, one could trace much of the popularity of Chaos theory and its financial applications to Norman Packard and Doyne Farmer, two former physicists working in the area of complex systems. Much of their story is discussed in the book, 'The Predictors,' by Thomas Bass. The two were at the forefront of new research in areas of complexity and chaos, and decided to parlay their knowledge into financial market applications when they founded Prediction Company in Santa Fe, New Mexico. The company was swallowed by UBS in 2005.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/_7YSZm5NIAmQ/TDNQ5E84hbI/AAAAAAAAAP0/GpQNHROkGTI/s1600/Lorenz.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="301" src="http://2.bp.blogspot.com/_7YSZm5NIAmQ/TDNQ5E84hbI/AAAAAAAAAP0/GpQNHROkGTI/s400/Lorenz.jpg" width="400" /></a></div>Fig 1. 2D slice of Lorenz Attractor Phase Space signature A.K.A. The butterfly effect.<br /><br />We could spend a lot of time discussing various facets of Chaos, as it is a very large field with many different related fields (such as fractals and complexity). But I want to focus on outlining a very simple understanding of why the field seemed so fascinating to traders and quants alike. Most experts in time series run a slew of tests to demonstrate that markets exhibit no predictable order.<br />However, what makes Chaos so fascinating is that certain time series may seem to pass a battery of common statistical tests for randomness, yet are perfectly deterministic.<br /><br />Chaos is a field of science engaged in studying the non-linear dynamical behavior of systems.
A popular example might be how different planetary bodies exhibit forces upon one another (see Poincaré), or the turbulent flow of various particles. Those engaged in studying such systems often like to observe signatures in a domain known as phase space or state space. Rather than look at the time series unfold over time, they look for a type of order that underlies the trajectories of the system's state dynamics as they unfold over time. The plot of the trajectory may show structural order that appears random in the time domain. One of the most popular attractor signatures is the well known Lorenz attractor, which is better known in popular literature as the butterfly effect. Fig 1, displayed earlier, shows a 2d slice of the phase space trajectory, which you can run over at <a href="http://www.cmp.caltech.edu/%7Emcc/Chaos_Course/Lesson1/Demo8.html">chaos applet</a>. The famous signature displays a fascinating case of underlying order that relates to dynamic atmospheric convection in three dimensions.<br /><br />A simpler and more applicable model that we will look at for illustration purposes is the well known Logistic Map (also known as the quadratic map or Feigenbaum map). This equation of a non-linear dynamical trajectory was investigated by and attributed to Robert May, a biologist studying models of fish populations.<br /><br />The recursive equation for the Logistic Map is: xnext = r*x(1-x)<br />Note that this is a feedback system, which offers some control over the dynamics of the system model by varying the value r. What you'll see if you plot it out vs. the control coefficient, r, is that the series moves from a stable system to one which bifurcates into periodic cycles; and as it approaches the value 4 it starts to behave chaotically. Chaotic behavior is aperiodic, which (like financial series) never repeats exactly; but (unlike financial series) has an underlying deterministic order.
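The three regimes described above are easy to reproduce. A minimal Python sketch (the r values and starting point are arbitrary illustrative choices):

```python
# Iterate the logistic map xnext = r*x*(1-x) from a starting value x0.
def logistic_series(r, x0=0.2, n=200):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

stable = logistic_series(2.8)   # settles onto the stable fixed point 1 - 1/r
cycle = logistic_series(3.2)    # bifurcates into a period-2 cycle
chaos = logistic_series(4.0)    # aperiodic, chaotic, yet fully deterministic
```

Sweeping r over a fine grid and plotting the long-run values of each series against r would reproduce the familiar bifurcation diagram.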
In order to see the beauty of chaos and how it exhibits determinism, let's first look at the time series.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/_7YSZm5NIAmQ/TDNRuddhXDI/AAAAAAAAAP8/ePZ5b2mxTOU/s1600/logistic_ts1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="326" src="http://4.bp.blogspot.com/_7YSZm5NIAmQ/TDNRuddhXDI/AAAAAAAAAP8/ePZ5b2mxTOU/s400/logistic_ts1.jpg" width="400" /></a></div>Fig 2. Time Series of Logistic Map Equation.<br /><br />Notice that the 1st plot, which shows the 1st-differenced time series, displays no signs of periodicity or determinism, nor does the cumulative walk displayed in the 2nd. However, if we look at the signature of the same series in phase space, we see a completely different picture.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/_7YSZm5NIAmQ/TDNSPG1ZZHI/AAAAAAAAAQE/j685M13Ywow/s1600/logistic_map.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="372" src="http://3.bp.blogspot.com/_7YSZm5NIAmQ/TDNSPG1ZZHI/AAAAAAAAAQE/j685M13Ywow/s400/logistic_map.jpg" width="400" /></a></div>Fig 3. Phase State Plot of Logistic Map<br /><br />Notice it is clearly deterministic in this figure: given any point on the x axis, we can easily determine the exact corresponding point one step into the future on the y axis. This should be fairly obvious, since the equation we started out with, xnext=r*x(1-x) = r*x-r*x^2, is a negative parabolic curve. However, many such time series do not have such a simple equation and must be tested in various ways to determine structural chaos.
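One very simple such test follows directly from the phase plot: for the logistic map, every pair (x[t], x[t+1]) lies exactly on the parabola r*x - r*x^2, while for gaussian noise the same one-step fit explains nothing. A sketch in Python (series lengths, seed, and tolerances are arbitrary choices):

```python
import random

# Worst one-step error of the lag-1 fit x[t+1] ~ r*x[t]*(1 - x[t]).
def parabola_residual(xs, r=4.0):
    return max(abs(xs[t + 1] - r * xs[t] * (1.0 - xs[t]))
               for t in range(len(xs) - 1))

# Chaotic logistic-map series: its lag-1 phase plot is a perfect parabola.
x, logistic = 0.3, []
for _ in range(500):
    logistic.append(x)
    x = 4.0 * x * (1.0 - x)

# Gaussian increments (the random-walk case): no lag-1 structure at all.
random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(500)]
```

Here `parabola_residual(logistic)` is essentially zero, while for `noise` it is large: the logistic series is fully determined one step ahead, and the gaussian one is not.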
There are other issues to contend with as well, including sensitivity to initial conditions and divergent trajectories due to finite computational precision.<br /><br />Ok, now that we understand all the hoopla about Chaos and a seemingly random signal having an underlying deterministic signature, what about a common financial time series?<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/_7YSZm5NIAmQ/TDNTXQ0dh9I/AAAAAAAAAQM/IJJOmOwItvQ/s1600/rw1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="327" src="http://4.bp.blogspot.com/_7YSZm5NIAmQ/TDNTXQ0dh9I/AAAAAAAAAQM/IJJOmOwItvQ/s400/rw1.jpg" width="400" /></a></div>Fig 4. Typical Random Walk (Financial) Time Series.<br /><br />The Random Walk shows no discernible order nor periodicity, similar to the logistic equation series. But what if we observe the phase space trajectory?<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/_7YSZm5NIAmQ/TDNT4PMPcHI/AAAAAAAAAQU/jCAf8pwc5Gs/s1600/rw_map.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="372" src="http://1.bp.blogspot.com/_7YSZm5NIAmQ/TDNT4PMPcHI/AAAAAAAAAQU/jCAf8pwc5Gs/s400/rw_map.jpg" width="400" /></a></div>Fig 5. Phase Space trajectory of random walk.<br /><br />No dice. Notice that the random walk shows zero determinism, consistent with its gaussian nature. There are numerous methods to examine higher-dimensional metrics as well (correlation lag plots, Lyapunov exponents, etc.), and other than something called the compass rose, I've not personally seen much evidence of deterministic chaos in raw financial time series.
Incidentally, the phase plot here is equivalent to a lag-one scatterplot of returns, for those more familiar with finance-related statistics.<br /><br />A last note on Prediction Company is that there is an often referenced paper on the Mackey-Glass equation <br />(a non-linear dynamic model of blood flow) by Meyer and Packard, in which they used genetic algorithms to find underlying conditional order rule sets for the series. <br /><br />I'll end with perhaps the best excerpt from 'The Predictors,' which echoes much of my own focus and discoveries over the last decade...<br /><br />"One of the fundamental truths about the markets is that the dynamics are nonstationary," Norman explains. "We see no evidence for the existence of an attractor with stable statistical properties. This is what characterizes chaos -- having an attractor with stable statistical properties -- so what we are seeing is not chaos. It is something else. Call it an 'even-stranger-than-strange attractor,' which may not really be an attractor at all.<br /><br />The market might enter an epoch where some structure coalesces and sits there in a statistically stationary pattern, but then invariably it disappears. You have clouds of structure that coalesce and evaporate, coalesce and evaporate. Prediction Company's job is to find those pieces of structure that have the strongest signal and persist the longest. We want to know when the structure is beginning to emerge or dissolve because, once it begins to dissolve, we want to stop betting on it."<br /><br />...excerpt from 'The Predictors,' Thomas A.
Bass (1999).Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com1tag:blogger.com,1999:blog-107568321062020427.post-1899062915120293332010-06-10T15:24:00.000-07:002011-05-22T22:31:55.537-07:00Quantitative Candlestick Pattern Recognition (HMM, Baum Welch, and all that)<div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/_7YSZm5NIAmQ/TBHXT4ptMZI/AAAAAAAAAPs/30AkoNSsqB0/s1600/cluster3d.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="347" src="http://4.bp.blogspot.com/_7YSZm5NIAmQ/TBHXT4ptMZI/AAAAAAAAAPs/30AkoNSsqB0/s400/cluster3d.jpg" width="400" /></a></div><br />Fig 1. Clustering-based approach to candlestick pattern recognition. <br /><br />I've been reading a book titled 'The Quants' that I'm sure will tantalize many traders with some of the ideas embedded within. Most notably (IMO), the notion that Renaissance's James Simons hired a battery of cryptographers and speech recognition experts to decipher the code of the markets. Most notable of the hired experts was Leonard Baum, co-developer of the Baum-Welch algorithm, an algorithm used to estimate the parameters of hidden Markov models. Now, while I don't plan to divulge full details of my own work in this area, I do want to give a brief example of some methods to possibly apply with respect to these ideas.<br /><br />Now most practitioners of classical TA have built up an enormous amount of literature around candlesticks, the Japanese symbols used to denote a symbolic formation around open, high, low, and close daily data. The problem, as I see it, is that most of the available literature only deals with qualitative recognition of patterns, rather than quantitative.<br /><br />We might want to utilize a more quantitative approach to analyzing the information, as it holds much more potential than single closing price data (i.e. the information in each candle contains four dimensions of information).
The question is, how do we do this in a quantitative manner?<br /><br /><div class="separator" style="clear: both; text-align: center;"></div>One well known method of recognizing patterns is the supervised method. In supervised learning, we feed the learner a correct list of responses to learn and configure itself from; over many iterations, it comes up with the optimal configuration to minimize errors between the data it learns to identify and the data we feed as examples. For instance, we might look at a set of hand-written characters and build a black box to recognize each letter by training via a neural net, support vector machine, or other supervised learning device. However, you probably don't want to spend hours classifying the types of candlesticks by name. Those familiar with candlesticks might recognize numerous different symbols: shooting star, hammer, doji, etc., each connoting a unique symbolic harbinger of the future to come. From a quantitative perspective, we might be more interested in the Bayesian view; i.e. P(upday|hammer)=P(upday,hammer)/P(hammer), for instance.<br /><br />But how could we learn the corpus of symbols without the tedious method of identifying each individual symbol by hand? This is a problem that may be better approached by unsupervised learning. In unsupervised learning, we don't need to train a learner; it finds relationships by itself. Typically, the relationships are established as a function of distance between exemplars. Please see the data mining text (Witten/Frank) in my recommended list in order to examine the concepts in more detail.
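As a toy sketch of the unsupervised route (synthetic bars and k=2 here, rather than the QQQQ data and six clusters discussed in this post; the minimal k-means and its naive initialization are illustrative assumptions, not the exact procedure used):

```python
# Each bar is reduced to the feature vector (high-open, low-open, close-open),
# then grouped by a minimal k-means; transitions between the learned states
# can then be cross-tabulated.

def kmeans(points, k, iters=10):
    """Minimal k-means with naive deterministic init (first k points)."""
    centers = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared euclidean distance
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(pt, centers[c])))
                  for pt in points]
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

# Synthetic bars: three big green bodies followed by three big red bodies.
candles = [(1.0, -0.1, 0.9), (1.1, -0.2, 1.0), (0.9, -0.1, 0.8),
           (0.1, -1.0, -0.9), (0.2, -1.1, -1.0), (0.1, -0.9, -0.8)]
labels = kmeans(candles, k=2)

# Cross tabulation of state-to-state transitions, as in Fig 4 below.
transitions = {}
for a, b in zip(labels, labels[1:]):
    transitions[(a, b)] = transitions.get((a, b), 0) + 1
```

With real data one would normalize the features (e.g. by recent range) and use a more careful initialization, but the mechanics are the same.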
In this case I am going to train using a very common unsupervised learner called k-means clustering.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/_7YSZm5NIAmQ/TBFgDg82FWI/AAAAAAAAAOc/_Xd3BAs2fl8/s1600/Qoriginal.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="210" src="http://4.bp.blogspot.com/_7YSZm5NIAmQ/TBFgDg82FWI/AAAAAAAAAOc/_Xd3BAs2fl8/s640/Qoriginal.jpg" width="640" /></a></div> Fig 2. Graph of arbitrary window of QQQQ data<br /><br />Notice the common time ordered candlestick form of plotting is displayed in Fig 2. Now using k-means clustering, with a goal of identifying 6 clusters, I tried to automatically learn 6 unique candlestick forms based on H,L,Cl data relative to Open in this example. The idea being that similar candlestick archetypes will tend to cluster by distance.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/_7YSZm5NIAmQ/TBFhbH7BegI/AAAAAAAAAOk/NqTGxB-X8M0/s1600/Qcluster1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="209" src="http://1.bp.blogspot.com/_7YSZm5NIAmQ/TBFhbH7BegI/AAAAAAAAAOk/NqTGxB-X8M0/s640/Qcluster1.jpg" width="640" /></a></div>Fig 3. Candlestick symbols sorted by 6 Clusters<br /><br />Notice in Figure 3, that we can clearly see the k-means clustering approach automatically recognized large red bodies, green bodies, and even more interestingly, there are a preponderance of hammers that were automatically recognized in cluster number 5.<br /><br />So given that we have identified a corpus of 6 symbols in our language, of what use might this be? 
Well, we can take and run a cross tabulation of our symbol states using a program like R.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/_7YSZm5NIAmQ/TBFigk63t8I/AAAAAAAAAOs/W-LlDaJjrDo/s1600/table1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/_7YSZm5NIAmQ/TBFigk63t8I/AAAAAAAAAOs/W-LlDaJjrDo/s320/table1.jpg" /></a></div>Fig 4. Cross Tabulation of Clustered States.<br /><br />One of the things that strikes me right away is that there are an overwhelming number of pairs with state 1s following state 5; Notice the 57% frequency trounces all other dependent states. Now what is interesting about this? Remember we established that state 5 corresponds to a hammer candlestick? Well, common intuition (at least from my years of reading) expects that a hammer is a turning point that is followed by an up move. Yet, in our table we see it is overwhelmingly followed by state 1, which if you look back, at the sorted by cluster diagram, is a very big red down candlestick. This is completely opposite to what our common body of knowledge and intuition tells us.<br /><br />In case it might seem unbelievable to fathom, we can resort the data again, this time in original time order, but with clusters identified.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/_7YSZm5NIAmQ/TBFzCPjsHTI/AAAAAAAAAPU/pKXwsXpsZ0o/s1600/Qstate5rev1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="234" src="http://4.bp.blogspot.com/_7YSZm5NIAmQ/TBFzCPjsHTI/AAAAAAAAAPU/pKXwsXpsZ0o/s640/Qstate5rev1.jpg" width="640" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div><br />Fig 5. 
Visual Inspection of hammer (state5) likely followed by down candle (state 1)<br /><br />We can go back and resort the data and identify the states via the resorted cluster ID staircase level(5), or use labels to more simply identify the case of hammer(5) and its following symbol. Notice that contrary to common knowledge, our automatic recognition process and tabulated probability matrix, found good corroboration with our visual inspection. In the simple window sample (resized to improve visibility), 4 of the 5 instances of the hammer (state 5) were followed by a big red down candle (state 1). Now one other comment to make is that in case the state 5 is not followed by state 1 (say, we bet on expecting a down move), it has a 14.3% chance of landing in state 6 on the next move, which brings our likelihood of a decent sized down move to 71.4% overall.<br /><br />We can take these simple quantitative ideas and extend them to MCMC dynamic models, Baum Welch and Viterbi algorithms, and all that sophisticated stuff. Perhaps one day even mimicking the mighty Renaissance itself? 
I don't know, but any edge we can add to our arsenal will surely help.<br /><br />Take some time to read 'The Quants,' if you want a great layman's view of many related quant approaches.<br /><br /><iframe frameborder="0" marginheight="0" marginwidth="0" scrolling="no" src="http://rcm.amazon.com/e/cm?lt1=_blank&bc1=000000&IS2=1&bg1=FFFFFF&fc1=000000&lc1=0000FF&t=ntelligenttra-20&o=1&p=8&l=as1&m=amazon&f=ifr&md=10FE9736YVPPT7A0FBG2&asins=0307453375" style="height: 240px; width: 120px;"></iframe><br /><br />There may be some bugs in this post, as google just seemed to update their editing platform, and I'm trying to iron out some of the kinks.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com29tag:blogger.com,1999:blog-107568321062020427.post-10421163357299437242010-05-25T10:41:00.001-07:002011-04-29T15:06:49.071-07:00The Kalman Filter For Financial Time SeriesEvery now and then I come across a tool that is so bogged down in pages of esoteric mathematical calculations that it becomes difficult to get even a simple grasp of how or why it might be useful. Even worse, you exhaustively search the internet to find a simple picture that might express a thousand equations, but find nothing. The Kalman filter is one of those tools. Extremely useful, yet very difficult to understand conceptually because of the complex mathematical jargon.
Below is a simple plot of a Kalman-filtered version of a random walk (for now, we will use that as a stand-in for a financial time series).<br /><br /><a href="http://2.bp.blogspot.com/_7YSZm5NIAmQ/S_wNkdp6ClI/AAAAAAAAAOU/hEYy4fWR9Rg/s1600/rw_plot.jpg" onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}"><img alt="" border="0" id="BLOGGER_PHOTO_ID_5475266167062530642" src="http://2.bp.blogspot.com/_7YSZm5NIAmQ/S_wNkdp6ClI/AAAAAAAAAOU/hEYy4fWR9Rg/s400/rw_plot.jpg" style="cursor: pointer; display: block; height: 250px; margin: 0px auto 10px; text-align: center; width: 400px;" /></a><br /><br />Fig 1. Kalman Filter estimates of mean and covariance of Random Walk<br /><br />The KF is a fantastic example of an adaptive model, more specifically a dynamic linear model, that is able to adapt to an ever-changing environment. Unlike a simple moving average or FIR that has a fixed set of windowing parameters, the Kalman filter constantly updates its information to produce adaptive filtering on the fly. Although there are a few TA-based adaptive filters, such as the Kaufman Adaptive Moving Average and variations of the exponential moving average, none captures the optimal estimation of the series the way the KF does. In the plot in Fig 1, we have a blue line representing the estimated dynamic 'average' of the underlying time series, the red line represents the time series itself, and lastly, the dotted lines represent a scaled covariance estimate of the time series against the estimated average.
Notice that unlike many other filters, the estimated average is a very good measure of the 'true' moving center of the time series.<br /><br />Without diving into too much math, the following are the well known 'state space equations' of the KF:<br />xt=A*xt-1 + w<br />zt=H*xt + v<br /><br />Although these equations are often expressed in state space or matrix representation, making them somewhat complicated to the layman, if you are familiar with simple linear regression it might make more sense.<br />Let's define the variables:<br />xt is the hidden variable that is estimated; in this case it represents the best estimate of the dynamic mean or dynamic center of the time series<br />A is the state transition matrix; I often think of it as similar to the autoregressive coefficient in an AR model; think of it as Beta in a linear regression here.<br />w is the process noise of the model.<br /><br />So, we can think of the equation xt=A*xt-1 + w as being very similar to the basic linear regression model, which it is; the main difference being that the KF constantly updates the estimates at each iteration in an online fashion. Those familiar with control systems might understand it as a feedback mechanism that adjusts for error. Since we cannot actually 'see' the true dynamic center in the future, only estimate it, we think of x as a 'hidden' variable.<br /><br />The other equation is linked directly to the first.<br />zt=H*xt+v<br />zt is the measured noisy state variable that has a probabilistic relationship to x.<br />xt we recognize as the estimate of the dynamic center of the time series.<br />v is the measurement noise of the model.<br /><br />Again, it is a linear model, but this time the equation contains something we can observe: zt is the value of the time series we are trying to capture and model with respect to xt.
More specifically, it is an estimate of the covariance, or co-movement, between the observed variable (the time series value) and the estimate of the dynamic variable x. You can also think of the scaled envelope it creates as similar to a standard deviation band that predicts the future variance of the signal with respect to x.<br /><br />Those familiar with hidden Markov models might recognize the concept of hidden and observed state variables displayed here.<br /><br />Basically, we start out with an initial guess of the average and covariance of the hidden series, based upon measurements of the observable series, which in this case are simply the normal parameters N(mean, std) used to generate the random walk. From there, the linear matrix equations are used to estimate the values of x and its covariance. The key is that once an estimate is made, it is checked against the actual observed time series value, z<sub>t</sub>, and a parameter called the Kalman gain, K, is adjusted to update the prior estimates. Each time K is updated, the estimate of x is updated via:<br />x<sub>t,new</sub> = x<sub>t,est</sub> + K&middot;(z<sub>t</sub> &minus; H&middot;x<sub>t,est</sub>)<br />The value of K generally converges to a stable value when the underlying series is truly Gaussian (as seen in Fig 1, the filter 'learns' during the start of the series). After a few iterations, the optimal value of K is fairly stable, so the model has learned, or adapted to, the underlying series. 
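The predict/update cycle described above can be written out in a few lines for the scalar case. This is a minimal sketch of my own, assuming A = H = 1 and illustrative guesses for the process and measurement noise variances Q and R (not the exact parameters behind Fig 1):

```python
import numpy as np

def kalman_mean(z, Q=1e-3, R=1.0):
    """Scalar Kalman filter tracking the dynamic mean of series z.
    A = H = 1; Q and R are assumed process/measurement noise variances."""
    x_est = z[0]   # initial state estimate
    P = 1.0        # initial state covariance estimate
    out = []
    for zt in z:
        # predict step: x_t = A*x_(t-1) + w, with A = 1
        x_pred = x_est
        P_pred = P + Q
        # update step: Kalman gain K weights the innovation (zt - H*x_pred)
        K = P_pred / (P_pred + R)
        x_est = x_pred + K * (zt - x_pred)
        P = (1 - K) * P_pred
        out.append(x_est)
    return np.array(out)

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(0, 1, 500))  # random walk stand-in for a price series
est = kalman_mean(walk)
```

Note that K here converges to a steady value of roughly sqrt(Q*R) after the first few iterations, which is the 'learning' phase visible at the start of Fig 1; at steady state the filter behaves much like an exponential moving average whose smoothing constant was chosen optimally for the assumed noise levels.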
<br /><br />Some advantages of the Kalman filter are that it is predictive and adaptive: it looks forward with an estimate of the mean and covariance of the time series one step into the future, and unlike a Neural Network, it does NOT require stationary data.<br />Those working on the Neural Network tutorials will hopefully see a big advantage here.<br /><br />It gives a nearly smooth representation of the series, while not requiring peeking into the future.<br /><br />Disadvantages are that the filter assumes linear dependencies and Gaussian noise terms. As we know, financial markets are not exactly Gaussian, since they tend to have fat tails more often than we would expect, non-normal higher moments, and volatility clustering (heteroskedasticity). A more advanced filter that addresses these issues is the particle filter, which uses sampling methods to estimate the underlying distribution parameters.<br /><br />--------------------------------------------------------------------------------<br />Here are some references which may further help in understanding the Kalman filter.<br />In addition, there is a Kalman smoother in the R package DLM.<br /><br />http://www.swarthmore.edu/NatSci/echeeve1/Ref/Kalman/ScalarKalman.html<br /><br />If you are interested in a Python based approach, I highly recommend the following book: Machine Learning: An Algorithmic Perspective.<br /><br />Not only is there a fantastic writeup on hidden Markov models and Kalman filters, but there is real code you can replicate. 
It is one of the best practical books on Machine Learning I have come across-- period.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com49tag:blogger.com,1999:blog-107568321062020427.post-35006106015135584812010-05-12T22:04:00.000-07:002010-05-13T00:03:24.070-07:00Is it possible to get a causal smoothed filter?Although I haven't been all that much of a fan of moving average based methods, I've observed some discussions and made some attempts to determine if it's possible to get an actual smoothed filter with a causal model. Anyone who's worked on financial time series filters knows that the bane of filtering is getting a smooth response with very low delay. One would think you need a very small moving average length to accomplish a causal filter with decent lag properties; often a sacrifice is made between choosing a large parameter to obtain decent smoothing and the lag it introduces.<br /><br />I put together the following FIR based filter using about one year of QQQQ daily data. It is completely causal and described by .. gasp.. 250 coefficients.<br /><br />Does it appear smooth? You decide.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://1.bp.blogspot.com/_7YSZm5NIAmQ/S-uJTii9Y4I/AAAAAAAAAOE/jpaCzr4Z5vo/s1600/causal1.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 271px;" src="http://1.bp.blogspot.com/_7YSZm5NIAmQ/S-uJTii9Y4I/AAAAAAAAAOE/jpaCzr4Z5vo/s400/causal1.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5470617141155554178" /></a><br /><br />Fig 1. 
FIR 250-tap feed-forward filter<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_7YSZm5NIAmQ/S-uJfRBhm3I/AAAAAAAAAOM/p34jPC2Otgk/s1600/impulse.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 246px;" src="http://4.bp.blogspot.com/_7YSZm5NIAmQ/S-uJfRBhm3I/AAAAAAAAAOM/p34jPC2Otgk/s400/impulse.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5470617342610348914" /></a><br /><br />Fig 2. The 250 impulse response weights determining the coefficients<br /><br />The impulse response is approximately a sinc function, which is the discrete inverse transform of an ideal 'brick wall' low-pass filter.<br /><br />I haven't actually verified much out of sample at the moment, so it's quite possible that the model may not fare as well; it remains to be investigated. However, I thought I would share this work to give some ideas about the potential of causal filtering methods.Intelligent Tradinghttp://www.blogger.com/profile/17765336450326139518noreply@blogger.com9
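A sinc-shaped impulse response like the one in Fig 2 can be built as a standard windowed-sinc FIR filter in a few lines of NumPy. This is an illustrative sketch, not the exact coefficients used above: the cutoff, the Hamming window, and the random-walk stand-in for prices are all my own choices.

```python
import numpy as np

def sinc_lowpass(num_taps=250, cutoff=0.05):
    """Windowed-sinc FIR low-pass filter coefficients.
    cutoff is the normalized cutoff frequency (fraction of the sampling rate)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2.0
    h = np.sinc(2 * cutoff * n)   # ideal 'brick wall' filter's inverse transform
    h *= np.hamming(num_taps)     # window to tame ripple from truncation
    return h / h.sum()            # normalize for unity gain at DC

h = sinc_lowpass()
prices = np.cumsum(np.random.default_rng(2).normal(0, 1, 1000))
# causal filtering: each output sample uses only current and past inputs
smoothed = np.convolve(prices, h)[:len(prices)]
```

The filter is strictly causal (each output is a weighted sum of the current and previous 249 samples), which is exactly why the smoothness comes at the price of group delay, roughly half the filter length for a symmetric design like this.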