Following is an example of what it looks like to predict an actual univariate price series. The sampled period of the signal was already in stationary form, so not much massaging was needed beyond the normalization described earlier.
What's important to notice about these kinds of neural network predictions (particularly in marketing snapshots from software vendors or trading book examples) is that they look fantastic out of sample from a bird's-eye view. Unfortunately, the devil is always in the details: if you zoom way in, the predictions are not as accurate as the larger picture portrays. A more accurate way to assess how well the prediction performed is to look at the percentage change of each predicted value; we can simply compare the sign of the actual percentage change to the sign of the predicted change (a sketch of this calculation follows the figure). In this case, the out-of-sample test results had a 43% hit rate, which is worse than a naive predictor would achieve. The good news is that you can flip those results and just predict the opposite direction to get a 57% hit rate. However, you always have to do due diligence to verify the robustness of these types of predictions over many conditions. Another thing to be careful about is that the hit rate only gives you the number of correct predictions; it tells you nothing about the magnitude of those predictions, which is important for a positive net expectation. The type of result you see here, however, is common when predicting specific univariate time series values.
Fig 1. Stock prediction with out-of-sample region highlighted
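To make the directional scoring concrete, here is a minimal sketch of the hit-rate calculation in Java. The price values and class name are invented for illustration; they are not the actual series from the figure.

// Sketch: directional hit rate for one-step-ahead predictions.
// Compares the sign of the actual percentage change against the sign
// of the predicted change from the same prior price.
public class HitRate {
    public static void main(String[] args) {
        double[] actual    = {100.0, 101.2, 100.7, 102.3, 101.9}; // illustrative
        double[] predicted = {100.0, 100.8, 101.1, 101.6, 102.4}; // illustrative

        int hits = 0, total = 0;
        for (int t = 1; t < actual.length; t++) {
            double actualChange    = (actual[t] - actual[t - 1]) / actual[t - 1];
            double predictedChange = (predicted[t] - actual[t - 1]) / actual[t - 1];
            // Only the sign matters for the hit rate, not the magnitude.
            if (Math.signum(actualChange) == Math.signum(predictedChange)) hits++;
            total++;
        }
        System.out.printf("Hit rate: %.0f%%%n", 100.0 * hits / total);
    }
}

A score below the 50% a coin flip would give is what motivates the "flip the signal" observation above.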
You now have a practical example to get you started building your own prediction system with free tools (except Excel, which you likely already have), along with some ideas and methods for extending it. Any professional software you purchase will not differ much, other than using different attributes to train on or modifying the internal architecture of the neural network. I have not shown more detailed examples of advanced techniques, but I might incorporate some later if there is demand.
Would you flip the results to use for real trading, or would you look for more positive initial results?
Hi,
Ideally, we often look for results that make sense. The less sense a result makes (for instance, mining factors that have no cause-and-effect relationship), the less robust you should feel about it.
That being said, reversing a strategy isn't always a bad idea either. In this particular case, however, if we ran statistical tests, the result would more likely be attributable to chance, since, as I explained, with non-smoothed learning the network is more likely doing the best it can to track the random noise (idiosyncratic) components.
In this blog you are using freeware data-mining tools which have no actual trading interfaces. How would you actually go about interfacing a model to a broker?
Hi Craig,
My focus is to allow users to actually build and experience many of these systems without going into trading interfaces, as that complexity is beyond the current scope of the tutorials.
It is possible to simply design a system and monitor it without a dedicated (API) interface. But I may expand on other languages and potential applications in the future.
Thanks for the suggestion.
How much more difficult/different would it be to build a neural network using nominal inputs and targets?
Thanks
bozwood,
It is essentially the identical procedure. The only difference is that your input stimulus file will have nominal inputs and targets. You can even mix them; the earlier classifier example, which mixed nominal and numeric stock/bond attributes, could just as well have been trained with a neural net.
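For illustration, a stimulus file mixing numeric and nominal attributes with a nominal target might look like this in Weka's ARFF format (the relation name, attributes, and data rows here are invented for the sketch):

@relation asset_class_example

@attribute volatility numeric
@attribute yield numeric
@attribute sector {financial, technology, utility}
@attribute class {stock, bond}

@data
0.23, 1.2, technology, stock
0.05, 4.1, financial, bond
0.17, 2.8, utility, stock

If I recall correctly, Weka's MultilayerPerceptron converts nominal attributes to binary indicators internally, so no manual encoding is needed.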
You will find there are many dualities between learning schemes. For instance, regression and classification are like two sides of a coin, mainly performing different tasks: one fits, while the other separates, but the internal logic is very similar.
Your last sentence makes a lot of sense and it makes it easier to conceptualize.
How would you handle sparse target data? I have seen some solutions such as replicating the lines of sparse outputs so that their count equals that of the non-sparse outputs (I am probably not describing this correctly), but I am not sure whether that is right. If it's too much to answer here, maybe you could incorporate it into an example down the road?
Thanks
bozwood,
There are methods to deal with sparse data; in particular, something called bootstrapping, or what Weka calls a Resample filter, which is a pre-processing option that generates additional exemplars based on statistical similarities (a short sketch using Weka's Java API appears at the end of this reply).
I would have to double-check, but offhand I believe the default training validation method uses stratification, which attempts to ensure that each data class is equally represented in each training set, so the model doesn't over-train on a set with an abnormal number of one-sided attributes.
That being said, I've found there is typically sufficient market data to train on (especially at higher sampling frequencies), and I am cautious about using bootstrapping, as it makes some statistical assumptions about IID and normality in the underlying data process.
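For reference, here is a minimal sketch of applying that filter through Weka's Java API (the input file name is hypothetical, and the bias settings are just one reasonable choice):

import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.supervised.instance.Resample;

public class ResampleSketch {
    public static void main(String[] args) throws Exception {
        // Load a training set (file name is illustrative).
        Instances data = DataSource.read("training.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Supervised resampling: bias the sample toward a uniform
        // class distribution to compensate for sparse target classes.
        Resample resample = new Resample();
        resample.setBiasToUniformClass(1.0);   // 1.0 = fully uniform classes
        resample.setSampleSizePercent(100.0);  // keep the original data set size
        resample.setInputFormat(data);

        Instances balanced = Filter.useFilter(data, resample);
        System.out.println("Resampled instances: " + balanced.numInstances());
    }
}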
Thanks for the great questions.