When applying cross validation, one has to choose an appropriate fold size, i.e. how many samples go into each fold when splitting the data set. An empirical way to do this is to try different fold sizes and check where the error reaches a minimum (or meets some other criterion). This is what the latest MATLAB script does.
It requires the function splitCV from the last post. The fold size is varied from almost the full data set (effectively no cross validation) down to 1 (leave-one-out), in roughly exponential steps since that was easy to implement. The script is, of course, highly experimental, contains loads of comments and is far from perfect.
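The original script is in MATLAB and uses splitCV, whose code is not shown here. As an illustration only, the sweep could be sketched in Python roughly like this (the helper names `cv_error` and `sweep_fold_sizes` are my own, not from the post):

```python
import numpy as np

def cv_error(X, y, fold_size, fit, predict):
    """Mean and stddev of the squared CV error when the data is split
    into folds of roughly `fold_size` samples (fold_size=1 -> leave-one-out)."""
    n = len(y)
    idx = np.arange(n)
    errors = []
    for start in range(0, n, fold_size):
        test = idx[start:start + fold_size]          # current fold
        train = np.setdiff1d(idx, test)              # everything else
        model = fit(X[train], y[train])
        pred = predict(model, X[test])
        errors.append(np.mean((pred - y[test]) ** 2))
    return np.mean(errors), np.std(errors)

def sweep_fold_sizes(X, y, fit, predict):
    """Try roughly exponentially spaced fold sizes, largest to smallest,
    mirroring the 'somewhat exponential' sweep described in the post."""
    n = len(y)
    sizes, size = [], n // 2
    while size >= 1:
        sizes.append(size)
        size //= 2                                   # exponential decrease
    return {s: cv_error(X, y, s, fit, predict) for s in sizes}

# Toy usage with a linear least-squares model instead of a neural network:
X = np.c_[np.ones(20), np.arange(20.0)]
y = 2.0 * np.arange(20.0) + 3.0
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda w, X: X @ w
results = sweep_fold_sizes(X, y, fit, predict)
```

The real script trains a neural network in each fold, which is where the computation-time differences discussed below come from.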
One of the obtained plots might look as follows:
One might still choose to reduce the fold size to 1, as it yields the smallest mean and standard deviation of the error (left in the graph). However, mean and standard deviation are not far off the respective values at a fold size of around 90. Furthermore, training with the smallest possible fold size requires orders of magnitude more computation time and might lead to overfitting of the neural network. Future steps will therefore quite likely be taken with a cross validation fold size of about 1/10th of the data.
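The trade-off described above can be made mechanical: accept the largest fold size (i.e. the cheapest sweep) whose mean error is within some tolerance of the best one. This is a sketch of that idea, not the post's actual decision rule; `pick_fold_size` and its `tol` parameter are hypothetical:

```python
import math

def pick_fold_size(results, tol=0.05):
    """Given {fold_size: (mean_err, std_err)}, return the largest fold size
    whose mean error is within a relative tolerance `tol` of the best mean.
    Larger folds mean fewer training runs, so this trades a little
    accuracy for much less computation time."""
    best = min(m for m, s in results.values())
    acceptable = [fs for fs, (m, s) in results.items()
                  if m <= best * (1 + tol) or math.isclose(m, best)]
    return max(acceptable)

# Example shaped like the discussion above: fold size 90 is nearly as
# good as leave-one-out, so it wins despite not being the minimum.
results = {90: (1.02, 0.10), 10: (1.00, 0.10), 1: (0.99, 0.10)}
```

With `tol=0` this falls back to the strict minimum, i.e. leave-one-out in the example.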
EDIT 2007-11-20: Changed "block size" to "fold size" as that is easier to understand. Thanks to Sandro Saitta.