The Predictive Power of High-Performance Computing in Finance

Because the second learner has a larger parameter set, the two learners can diverge when they encounter out-of-sample data that is dissimilar from the training and validation sets.
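A minimal sketch of this effect, using two polynomial regressions as the "small" and "large" learners (the data, degrees, and noise level are all illustrative assumptions, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: a noisy linear signal on [0, 1]
x_train = np.linspace(0.0, 1.0, 20)
y_train = 2.0 * x_train + rng.normal(scale=0.05, size=x_train.size)

# Two learners: a small (degree-1) and a large (degree-9) polynomial
small = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)
large = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

# In sample, the two learners are nearly indistinguishable
in_sample_gap = np.max(np.abs(small(x_train) - large(x_train)))

# Out-of-sample data dissimilar from training: extrapolate to [1.5, 2]
x_oos = np.linspace(1.5, 2.0, 10)
oos_gap = np.max(np.abs(small(x_oos) - large(x_oos)))

print(in_sample_gap, oos_gap)
```

Both learners fit the training data almost identically, but the larger parameter set lets the degree-9 model behave erratically away from the training region, so the gap between the two predictions grows sharply out of sample.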

Conversely, whenever we use a supervised learner whose parameter set has not been minimised for the task at hand, we are consciously accepting out-of-sample errors.

For any application relying on financial market data, data can generally be considered scarce, which makes this last observation enormously important in finance. Put differently: unless we reduce the parameters in our supervised learner to the absolute minimum required to achieve the desired training and validation performance, we are creating out-of-sample errors.

For finite data sets, we therefore need to adapt our verification procedure to find a parameter-minimised algorithm which achieves our desired calibration and validation performances. Unfortunately, this can be computationally very expensive, since a large number of different algorithms needs to be tried. The benefit, of course, is improved out-of-sample performance.
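The search described above can be sketched in miniature: try candidate models in order of increasing parameter count and keep the smallest one that meets a target validation performance. The data, the polynomial model family, and the `TARGET` threshold are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: a noisy quadratic signal
x = np.linspace(-1.0, 1.0, 60)
y = 1.0 + 0.5 * x - 2.0 * x**2 + rng.normal(scale=0.1, size=x.size)

# Split into calibration (training) and validation sets
x_cal, y_cal = x[::2], y[::2]
x_val, y_val = x[1::2], y[1::2]

def validation_rmse(deg):
    """Fit a polynomial of the given degree on the calibration set,
    then score it on the held-out validation set."""
    model = np.polynomial.Polynomial.fit(x_cal, y_cal, deg=deg)
    return np.sqrt(np.mean((model(x_val) - y_val) ** 2))

TARGET = 0.15  # desired validation performance (an assumed threshold)

# Try models in order of increasing parameter count and keep the first
# (i.e. smallest) one that meets the target -- this is the expensive
# search the text describes, in miniature.
chosen = next(d for d in range(0, 10) if validation_rmse(d) <= TARGET)
print(chosen)
```

In a realistic financial setting the candidate set is far richer than polynomial degrees, which is why the article points to high-performance computing: each candidate requires a full calibration-and-validation pass, and the search multiplies that cost.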

Read the full article, by Jan Witte, on Verne Global.
