JingTao YAO
Dept of Information Systems
Massey University, Private Bag 11222, Palmerston North, New Zealand
J.T.Yao@massey.ac.nz

Abstract

Neural networks are good at classification, forecasting and recognition. They are also good candidates for financial forecasting tools. Forecasting is often used in the decision making process. Neural network training is an art. Trading based on neural network outputs, or the trading strategy, is also an art. We will discuss a seven-step neural network forecasting model building approach in this article. Pre- and post-data processing/analysis skills, data sampling, training criteria and model recommendation will also be covered in this article.
1. Forecasting with Neural Networks
Forecasting is a process that produces a set of outputs from a given set of variables. The variables are normally historical data. Basically, forecasting assumes that future occurrences are based, at least in part, on presently observable or past events. It assumes that some aspects of past patterns will continue into the future. Past relationships can then be discovered through study and observation. The basic idea of forecasting is to find an approximation of the mapping between the input and output data in order to discover the implicit rules governing the observed movements. For instance, the forecasting of stock prices can be described in this way. Assume that u_i represents today's price and v_i represents the price after ten days. If the prediction of a stock price after ten days could be obtained using today's stock price, then there should be a functional mapping from u_i to v_i, where v_i = Γ_i(u_i). Using all (u_i, v_i) pairs of historical data, a general function Γ(), which consists of the Γ_i(), could be obtained, that is v = Γ(u). More generally, u, which may consist of more information than today's price, could be used in the function Γ(). As NNs are universal approximators, we can find a NN that simulates this function Γ(). The trained network is then used to predict the movements for the future.
Chew Lim TAN
Dept of Computer Science
National University of Singapore
1 Science Drive 2Singapore 117543
Singapore
tancl@comp.nus.edu.sg

NN based financial forecasting has been explored for about a decade. Many research papers have been published in various international journals and conference proceedings. Some companies and institutions also claim or market so-called advanced forecasting tools or models. Some research results on financial forecasting can be found in the references: for instance, a stock trading system [4], stock forecasting [6, 22], foreign exchange rate forecasting [15, 24], option prices [25], and advertising and sales volumes [13]. However, Callen et al. [3] claim that NN models are not necessarily superior to linear time series models even when the data are financial, seasonal and non-linear.
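The functional-mapping view of forecasting described in Section 1 can be illustrated with a minimal sketch. The prices below are synthetic, and a single linear neuron stands in for the NN approximator; this is not the authors' actual model, only an illustration of learning v = Γ(u) from (u_i, v_i) pairs.

```python
import random

# Synthetic "historical" prices with a gentle upward drift (illustrative only).
random.seed(0)
prices = [100 + 0.5 * t + random.uniform(-1, 1) for t in range(60)]

# Build (u_i, v_i) pairs: today's price -> price ten days ahead.
pairs = [(prices[i], prices[i + 10]) for i in range(len(prices) - 10)]

# Fit v ~ w*u + b by stochastic gradient descent on squared error,
# standing in for training a NN to approximate the mapping Gamma.
w, b = 0.0, 0.0
lr = 1e-5
for _ in range(2000):
    for u, v in pairs:
        err = (w * u + b) - v
        w -= lr * err * u
        b -= lr * err

# The fitted neuron approximates Gamma; forecast ten days past the last price.
print("forecast: %.2f" % (w * prices[-1] + b))
```

A real NN replaces the linear neuron with hidden layers and nonlinear activations, but the structure of the problem, mapping historical inputs to future outputs, is the same.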
2. Towards a More Robust Financial Forecasting Model
In working towards a more robust financial forecasting model, the following issues are worth examining.
First, instead of emphasizing forecasting accuracy only, other financial criteria should be considered. Current researchers tend to use goodness of fit or similar criteria to judge or train their models in the financial domain. In terms of mathematical calculation this approach is theoretically correct. However, a perfect forecast is impossible in reality; no model can achieve such an ideal goal. Under this constraint, seeking a perfect forecast is not our aim. We can only try to optimize our imperfect forecasts and use other yardsticks to give the most realistic measure.
Second, there should be adequate organization and processing of forecasting data. Preprocessing and proper sampling of input data can have an impact on forecasting performance. Choosing indicators as inputs through sensitivity analysis can help to eliminate redundant inputs. Furthermore, NN forecasting results should be used wisely and effectively. For example, as the forecast is not perfect, should we compare the NN output with the previous forecast or with the real data, especially when price levels are used as the forecasting targets?
Third, a trading system should be used to decide on the best tool to use. NNs are not the only tool that can be used for financial forecasting, and we cannot claim that they are the best forecasting tool. In fact, it is still not known which kinds of time series are most suitable for NN applications. Conducting post-forecasting analysis will allow us to find out the suitability of models and series. We may then conclude that a certain kind of model should be used for a certain kind of time series. Training or building NN models is a trial and error procedure. Some researchers are not willing to test more on their data sets [14]. If there were a system that could help us formalize these tedious exploratory procedures, it would certainly be of great value to financial forecasting.
Instead of just presenting one successful experiment, a possibility or confidence level can be attached to the outputs. Data are partitioned into several sets to find out the particular knowledge of the time series. As stated by David Wolpert and William Macready in their No-Free-Lunch theorems [27], averaged over all problems, all search algorithms perform equally. By experimenting on a single data set, a NN model which outperforms other models can be found. However, for another data set a model which outperforms the NN model can also be found, according to the No-Free-Lunch theorems. To avoid such a case of one model outperforming others by chance, we partition the data set into several sub data sets. The recommended NN models are those that outperform other models for all sub time horizons. In other words, only those models incorporating enough local knowledge can be used for future forecasting.
It is very important and necessary to emphasize these three issues here. Different criteria exist for academia and industry. In academia, people sometimes seek accuracy approaching 100%, while in industry a guaranteed 60% accuracy is typically aimed for. In addition, profit is the eventual goal of practitioners, so a profit oriented forecasting model may better fit their needs.
Cohen [5] surveyed 150 papers in the proceedings of the 8th National Conference on Artificial Intelligence. He discovered that only 42% of the papers reported that a program had run on more than one example; just 30% demonstrated performance in some way; a mere 21% framed hypotheses or made predictions. He then concluded that the methodologies used were incomplete with respect to the goals of designing and analyzing AI systems.
Tichy [20] reported similar findings in a very large study of over 400 research articles in computer science: over 40% of the articles about new designs and models completely lacked experimental data. In a recent IEEE Computer article, he also points out 16 excuses computer scientists use to avoid experimentation [21]. His observations are meant seriously, not as a joke.
Prechelt [14] showed that the situation is no better in the NN literature. Out of 190 papers published in well-known journals dedicated to NNs, 29% did not employ even a single realistic or real learning problem. Only 8% of the articles presented results for more than one problem using real world data.
To build a NN forecasting model we need sufficient experiments. Testing only one market or one particular time period means nothing; manual, trial-and-error, or ad hoc experiments will not lead to a robust model. A more robust model is needed, one that holds in more than one market and for more than one time period. Because of the lack of industrial models, and because failures in academic research are not published, a single person or even a group of researchers will not gain enough information or experience to build a good forecasting model. It is therefore obvious that an automated system for building NN models is necessary.
3. Steps of NN Forecasting: The Art of NN Training
As NN training is an art, many researchers and practitioners have worked towards successful prediction and classification. For instance, William Remus and Marcus O'Connor [16] suggest some principles of critical importance for NN prediction and classification in their chapter in Principles of Forecasting: A Handbook for Researchers and Practitioners:
• Clean the data prior to estimating the NN model.
• Scale and deseasonalize data prior to estimating the model.
• Use appropriate methods to choose the right starting point.
• Use specialized methods to avoid local optima.
• Expand the network until there is no significant improvement in fit.
• Use pruning techniques when estimating NNs and use holdout samples when evaluating NNs.
• Take care to obtain software that has in-built features to avoid NN disadvantages.
• Build plausible NNs to gain model acceptance by reducing their size.
• Use more approaches to ensure that the NN model is valid.
With the authors' experience and sharing from other researchers and practitioners, we propose a seven-step approach for NN financial forecasting model building. The seven steps are the basic components of the automated system and are normally involved in the manual approach. Each step deals with an important issue. They are data preprocessing, input and output selection, sensitivity analysis, data organization, model construction, post analysis and model recommendation.

Step 1. Data Preprocessing
A general format of data is prepared. Depending on the requirement, longer term data, e.g. weekly or monthly data, may also be calculated from more frequently sampled time series. One might think that it makes sense to sample data as frequently as possible for experiments. However, researchers have found that increasing the observation frequency does not always help to improve the accuracy of forecasting [28].
Inspection of the data to find outliers is also important, as outliers make it difficult for NNs and other forecasting models to model the true underlying function. Although NNs have been shown to be universal approximators, it has been found that NNs have difficulty modeling seasonal patterns in time series [11]. When a time series contains significant seasonality, the data need to be deseasonalized.
Before the data is analyzed, basic preprocessing is needed. If days with no trading at all exist, the missing data need to be filled in manually. Heinkel and Kraus [9] stated that there are three possible ways of dealing with days with no trading: 1) ignore the days with no trading and use data for trading days only; 2) assign a zero value to the days with no trading; 3) build a linear model which can be used to estimate the data value for the days with no trading. In most cases, the horizontal axis is marked by the market day instead of (or in addition to) the calendar date. Suppose we are forecasting the price at the next time point. If today is a Monday, the next time point is tomorrow, Tuesday. If today is a Friday, the next time point will be the following Monday, in fact two days later.
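The three treatments above can be sketched as follows. The exact linear model of Heinkel and Kraus [9] is not reproduced here, so option 3 is approximated by interpolating between the nearest trading days (adequate only for short gaps); the series is assumed to start and end on trading days.

```python
def fill_no_trading_days(prices, method="interpolate"):
    """Sketch of the three treatments for no-trading days.

    `prices` holds one value per calendar day, with None marking days
    with no trading.
    """
    if method == "ignore":              # 1) keep trading days only
        return [p for p in prices if p is not None]
    if method == "zero":                # 2) assign a zero value
        return [0.0 if p is None else p for p in prices]
    # 3) stand-in for a linear model: midpoint of the nearest trading days
    filled = list(prices)
    for i, p in enumerate(filled):
        if p is None:
            left = next(filled[j] for j in range(i - 1, -1, -1)
                        if filled[j] is not None)
            right = next(filled[j] for j in range(i + 1, len(filled))
                         if filled[j] is not None)
            filled[i] = (left + right) / 2
    return filled

series = [10.0, None, 12.0, 11.0]
print(fill_no_trading_days(series))            # [10.0, 11.0, 12.0, 11.0]
print(fill_no_trading_days(series, "ignore"))  # [10.0, 12.0, 11.0]
```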
In most cases, a weekly closing price refers to the Friday closing price. In the event of Friday being a holiday, the most recently available closing price for the stock is used. Some researchers also pick another fixed day of the week for weekly prices. Normalization is also conducted in this phase. The purpose of normalization is to modify the output levels to a reasonable value. Without such a transformation, the value of the output may be too large for the network to handle, especially when several layers of nodes in the NN are involved. A transformation can occur at the output of each node, or it can be performed at the final output of the network. Original values, Y, along with the maximum and minimum values in the input file, are later entered into the equation below to scale the data to the range of [-1, +1]:

Nm = (2*Y - (Max + Min)) / (Max - Min)
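The scaling formula above is a direct min-max transformation and can be applied to a series as follows (assuming Max ≠ Min):

```python
def normalize(values):
    """Scale values linearly to [-1, +1]: Nm = (2*Y - (Max+Min)) / (Max-Min)."""
    lo, hi = min(values), max(values)
    return [(2 * y - (hi + lo)) / (hi - lo) for y in values]

prices = [120.0, 125.5, 118.0, 130.0, 127.25]
scaled = normalize(prices)
print(scaled)  # the minimum maps to -1.0, the maximum to +1.0
```

The inverse transformation, Y = (Nm * (Max - Min) + (Max + Min)) / 2, is applied to network outputs to recover price levels.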
Step 2. Selection of Input & Output Variables
Select inputs from the available information. Inputs and targets also need to be carefully selected. Traditionally, only changes are processed to predict targets, as returns or changes are the main concern of fund managers. Three types of changes have been used in previous research: x_t - x_{t-1}, log x_t - log x_{t-1}, and (x_t - x_{t-1}) / x_{t-1}. In addition, pure time series forecasting techniques require a stationary time series, while most raw financial time series are not stationary. Here stationarity refers to a stochastic process whose mean, variance and covariance (first and second order moments) do not change with time. x_t - x_{t-1} is thought to be unit dependent, which makes comparisons between series difficult, so it is less used in the literature. However, now that NNs have been introduced, we can use the original time series as our forecasting targets and let the networks determine the units or patterns from the time series. In fact, the traditional returns are not the exact returns in real life: at the very least, inflation is not taken into account. These returns are named nominal returns, ignoring inflation, as inflation cannot be calculated sensibly from daily series. After the aim has been fixed, the NN model will find the relationship between the inputs and the fixed targets. The relationship is discovered from the data rather than imposed by human expectation.
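For concreteness, the three change types can be computed from a price series x as follows (illustrative prices only):

```python
import math

x = [100.0, 102.0, 101.0, 105.0]  # example daily prices

# The three changes used in previous research:
diff = [x[t] - x[t - 1] for t in range(1, len(x))]                # x_t - x_{t-1}
log_ret = [math.log(x[t]) - math.log(x[t - 1])
           for t in range(1, len(x))]                             # log difference
pct_ret = [(x[t] - x[t - 1]) / x[t - 1] for t in range(1, len(x))]  # relative change

print(diff)  # [2.0, -1.0, 4.0]
```

Note that diff keeps the units of the original series (the unit-dependence mentioned above), while the log and relative returns are unit-free.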
In addition to pure time series, the inputs to NNs can also include technical indicators such as moving averages, momentum, RSI, etc. These indicators are in popular use amongst chartists and floor traders. Certain indicators, such as moving averages, are among the oldest technical indicators in existence, and they happen to be among the most useful. In practice, a trader may focus on one indicator and trade based on certain basic rules; however, he needs other indicators to confirm his findings. For instance, if a short term, say 10 day, moving average crosses over a long term, say 30 day, moving average and both moving averages are in an upward direction, it is time to go long. If the 10 day moving average crosses below the 30 day moving average and both moving averages are directed downward, most traders will consider this a valid sell signal. With fast calculation capability, more indicators or combined indicators could be used.

Step 3. Sensitivity Analysis
Sensitivity analysis is used to find out which indicators are more sensitive to the outputs. In other words, after a sensitivity analysis, we can easily eliminate the less sensitive variables from the input set. Usually, sensitivity analysis is used to reduce the number of fundamental factors. NNs or other forecasting models are used in forecasting because the forecast target is believed to be related to many other series. Sometimes, the input variables may be correlated with each other, and simply using all the available information may not always enhance the forecasting abilities. This matches the observation that complex models do not always outperform simple ones. Empirical studies have shown that forecasts using econometric models are not necessarily more accurate than those employing time series methods [7]. If the ability to explain economic or business phenomena, which can increase our understanding of relationships between variables, is not counted, econometric models would be useless. Besides fundamental factors, technical indicators may also be used for sensitivity analysis. The basic idea is that several trainings are conducted using different variables as inputs to a NN, and their performances are then compared. If there is not much difference in performance with or without a variable, this variable is said to be of less significance to the target and can thus be deleted from the inputs to the network. Instead of changing the number of input variables, another approach changes the values of a particular variable. Again, several trainings are conducted using perturbed variables. Each time, a positive or negative change is introduced into the original value of a variable. If there is not much difference in performance with or without changes to a variable, this variable is said to be of less significance.
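The perturbation approach just described can be sketched as follows. For brevity, rather than retraining a NN for each perturbed variable as the text describes, this simplified sketch perturbs the inputs of an already-fitted model and compares errors; the ranking idea is the same. Non-zero inputs are assumed, since the perturbation is multiplicative.

```python
def sensitivity(model, samples, targets, delta=0.05):
    """Rank input variables by how much perturbing each one degrades fit.

    model: callable taking one input vector (list) and returning a forecast.
    Returns (variable index, sensitivity score) pairs, most sensitive first.
    """
    def mse(perturb_idx=None, factor=1.0):
        total = 0.0
        for x, y in zip(samples, targets):
            x = list(x)
            if perturb_idx is not None:
                x[perturb_idx] *= factor   # multiplicative +/- delta change
            total += (model(x) - y) ** 2
        return total / len(samples)

    base = mse()
    ranking = []
    for j in range(len(samples[0])):
        up = mse(j, 1.0 + delta)
        down = mse(j, 1.0 - delta)
        ranking.append((j, max(up, down) - base))  # error increase = sensitivity
    return sorted(ranking, key=lambda t: -t[1])

# Toy model that in truth depends only on the first variable.
model = lambda x: 2 * x[0]
samples = [[1.0, 5.0], [2.0, 3.0]]
targets = [2.0, 4.0]
print(sensitivity(model, samples, targets))
# variable 0 ranks first; variable 1 has (near) zero sensitivity
```

Variables whose scores are near zero are candidates for deletion from the input set.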
Overfitting is another major concern in the design of a NN. When there is not enough data available to train the NN and the structure of the NN is too complex, the NN tends to memorize the data rather than generalize from it. Keeping the NN small is one way to avoid overfitting. One can prune the network to a small size using techniques such as those in [12].

Step 4. Data Organization
The next step is data organization. In the data preprocessing step, we have chosen the prediction goal and the inputs that should be used. The historical data may not necessarily contribute equally to the model building. We know that for certain periods the market is more volatile than others, while some periods are more stable. We can emphasize a certain period of data by feeding it to the network more times, or eliminate data patterns from unimportant time periods. Under the assumption that volatile periods contribute more, we would sample more from volatile periods, or vice versa. We can only settle this through experiments on the particular data set.
The basic assumption for time series forecasting is that the pattern found in historical data will hold in the future. Traditional regression forecasting model building uses all the data available. However, the model obtained may not be suitable for the future. When training NNs, we can hold out a set of data, the out-of-sample set, apart from training. After the network is confirmed, we use the out-of-sample data to test its performance. There are tradeoffs between testing and training. One should not call a model the best unless one has tested it, but once one has tested it one has not trained enough. In order to train NNs better, all the available data should be used; the problem is then that we have no data on which to test the "best" model. In order to test the model, we partition the data into three parts. The first two parts are used to train (and validate) the NN, while the third part is used to test the model. The networks are then not trained on all the data, as the third part is not used in training. The general partition rule for the training, validation and testing sets is 70%, 20% and 10% respectively, according to the authors' experience.

Step 5. Model Construction
The model construction step deals with the NN architecture, hidden layers and activation function. A backpropagation NN is determined by many factors: the number of layers, the number of nodes in each layer, the weights between nodes and the activation function. In our study, a hyperbolic tangent function is used as the activation function for a backpropagation network. As with conventional forecasting models, it is not necessarily true that a more complex NN, in terms of more nodes and more hidden layers, gives a better prediction. It is important not to have too many nodes in the hidden layer, because this may allow the NN to learn by example only and not to generalize [1]. When building a suitable NN for a financial application we have to balance between convergence and generalization. We use a one hidden layer network for our experiments. We adopt a simple procedure for deciding the number of hidden nodes, which is determined by the number of nodes in the input or preceding layer. For a single hidden layer NN, the numbers of nodes in the hidden layer being experimented with are in the order n/2, n/2 ± 1, n/2 ± 2, ..., where n is the number of inputs. The minimum number is 1 and the maximum number is n + 1. In the case where a single hidden layer is not satisfactory, an additional hidden layer is added. Another round of similar experiments is then conducted for each of the single layer networks, where n/2 now stands for half of the number of nodes in the preceding layer. Besides the architecture itself, the weight change is also quite important: the learning rate and momentum rate can lead to different models.
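The hidden-node search order described above can be generated as follows; this is a sketch of the enumeration, with n the number of inputs, expanding outwards from n/2 and clipping candidates to the range [1, n + 1]:

```python
def hidden_node_candidates(n_inputs):
    """Candidate hidden layer sizes in the order n/2, n/2-1, n/2+1, n/2-2, ...
    clipped to the range [1, n_inputs + 1]."""
    half = max(1, n_inputs // 2)
    seen, order = set(), []
    for k in range(0, n_inputs + 2):
        for cand in (half - k, half + k):
            if 1 <= cand <= n_inputs + 1 and cand not in seen:
                seen.add(cand)
                order.append(cand)
    return order

print(hidden_node_candidates(8))  # starts at 4, expands outwards up to 9
```

Each candidate size would be trained and compared on the validation set; the first size giving no significant improvement over its neighbours ends the search.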
A crucial point is the choice of the sigmoid activation function of the processing neuron. There are several variations on the standard backpropagation algorithm which aim at speeding up its relatively slow convergence, avoiding local minima or improving its generalization ability: e.g. the use of activation functions other than the usual sigmoid, the addition of a small positive offset to the derivative of the sigmoid function to avoid saturation at the extremes, and the use of a momentum term in the equation for the weight change. More detailed discussions can be found in [10] and [8].
Although the backpropagation algorithm does not guarantee an optimal solution, Rumelhart [17] observed that solutions obtained from the algorithm came close to the optimal ones in their experiments. The accuracy of approximation for NNs depends on the selection of a proper architecture and weights; however, backpropagation is only a local search algorithm and thus tends to become trapped in local optima. Random selection of initial weights is a common approach. If these initial weights are located near a local optimum, the algorithm will likely become trapped there. Some researchers have tried to solve this problem by imposing constraints on the search space or by restructuring the architecture of the NN. For example, parameters of the algorithm can be adjusted to affect the momentum of the search so that the search will break out of local optima and move toward the global solution. Another common method for finding the best (perhaps global) solution using backpropagation is to restart the training at many random points. Wang [26] proposes a "fix" for certain classification problems by constraining the NN to only approximate monotonic functions.
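The random-restart strategy mentioned above can be sketched generically. Here train_fn is a hypothetical training routine, assumed to return a trained model and its validation error for a given random seed; any backpropagation trainer could be plugged in.

```python
def train_with_restarts(train_fn, n_restarts=10):
    """Run a training routine from several random initializations and keep
    the model with the lowest validation error, as a guard against local
    optima. train_fn(seed) is assumed to return (model, val_error)."""
    best_model, best_err = None, float("inf")
    for seed in range(n_restarts):
        model, err = train_fn(seed)
        if err < best_err:
            best_model, best_err = model, err
    return best_model, best_err

# Toy stand-in: "training" whose error depends deterministically on the seed.
best, err = train_with_restarts(lambda s: (s, (s - 3) ** 2), n_restarts=10)
print(best, err)  # seed 3 gives the lowest error, 0
```

The restarts are independent, so in practice they can also be run in parallel.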
Another issue with the backpropagation network is the choice of the number of hidden nodes. While trial-and-error is a common method to determine the number of hidden nodes in a network, genetic algorithms are also often used to find the optimum number [19]. In fact, in recent years, there has been increasing use of genetic algorithms in conjunction with NNs. The application of genetic algorithms to NNs has followed two separate but related paths. First, genetic algorithms have been used to find the optimal network architectures for specific tasks. The second direction involves optimization of the NN using genetic algorithms for search. No matter how sophisticated the NN technology, the design of a neural trading system remains an art. This art, especially in terms of training and configuring NNs for trading, can be simplified through the use of genetic algorithms.
The traditional training criterion for backpropagation NNs is based on goodness-of-fit, which is also the most popular criterion for forecasting. However, in the context of financial time series forecasting, we are not only concerned with how well the forecasts fit their targets. In order to increase forecastability in terms of profit earning, Yao [23] proposes a profit based adjusted weight factor for backpropagation network training. Instead of the traditional least squares error, a factor which contains profit, direction, and time information is added to the error function. The results show that the new approach does improve the forecastability of NN models for the financial application domain.

Step 6. Post Analysis
In the post analysis step, experimental results are analyzed to find possible relationships, such as the relations between higher profit and data characteristics. According to the performance of each segment, we can decide how long this model can be used; in other words, how often we should retrain the NN model. The knowledge gained from experiments will be used in future practice. A major disadvantage of NNs is that their forecasts seem to come from a black box: one cannot explain why the model made good predictions by examining the model parameters and structures. This makes NN models hard to understand and difficult for some managers to accept. Some work has been done to make these models more understandable [2, 18].

Step 7. Model Recommendation
As we know, certainty can emerge from a large number of uncertainties. The behavior of an individual cannot be forecast with any degree of certainty, but the behavior of a group of individuals can be forecast with a higher degree of certainty. One case of success does not mean there will be success in the future. In our approach, we do not just train the network once using one data set. The final NN model we suggest for forecasting is not a single network but a group of networks. The networks are among the best models we have found using the same data set but different samples, segments, and architectures. The model recommendation can be either best-so-far or committee. Best-so-far is the best model for the testing data, in the hope that it is also the best model for future new data. As we cannot guarantee that a single model is suitable for the future, we recommend a group of models as a committee in our final model. When a forecast is made, instead of relying on one model, it can be concluded from the majority of the committee. On average, the committee suggestion for the historical data is superior to a single model; therefore the possibility of future correctness is greater.
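A committee forecast by simple majority, as described above, can be sketched for direction forecasts (+1 for up, -1 for down). The tie-handling rule (no signal on a tie) is our assumption, not specified in the text.

```python
def committee_vote(forecasts):
    """Combine direction forecasts (+1 up, -1 down) from several models by
    simple majority; a tie yields 0, i.e. no signal (our assumption)."""
    s = sum(forecasts)
    return (s > 0) - (s < 0)   # sign of the vote total

print(committee_vote([+1, +1, -1]))  # 1  (majority says up)
print(committee_vote([+1, -1]))      # 0  (tie: no signal)
```

Weighted votes, e.g. by each member's validation performance, are a natural refinement of the same idea.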
4. Concluding Remarks
NNs are suitable for financial forecasting and marketing analysis. They can be used for financial time series forecasting, such as stock exchange indices and foreign exchange rates. Research experiments show that NN models can outperform conventional models in most cases. As beating the markets is still a difficult problem, even if a NN cannot work as an alternative to traditional forecasting and analysis models, it can at least work as a complementary tool.
Some people treat NNs as a panacea. However, there is no cure-all medicine in the world. When applying a NN model in a real application, attention should be paid to every single step. The usage and training of a NN is an art.
One successful experiment says nothing about real application. There is always another model that is superior to the successful model on some other data set. Segmenting the data into several sub sets and training a NN with the same architecture on each helps ensure that the model does not just work for a single data set. Furthermore, building a model construction system will free human beings from the tedious trial-and-error procedures.
References
[1] E. B. Baum, D. Haussler, "What Size Net Gives Valid Generalization?", Neural Computation, 1, 1989, pp151-160.
[2] J. M. Benitez, J. L. Castro, I. Requena, "Are artificial neural networks black boxes?", IEEE Transactions on Neural Networks, 8, 1997, pp1156-1164.
[3] J. L. Callen, C. C. Y. Kwan, P. C. Y. Yip, Y. F. Yuan, "Neural network forecasting of quarterly accounting earnings", International Journal of Forecasting, 12(4), 1996, pp475-482.
[4] A. J. Chapman, "Stock Market Trading Systems Through Neural Networks: Developing a Model", International Journal of Applied Expert Systems, 2(2), 1994, pp88-100.
[5] P. R. Cohen, "A Survey of the Eighth National Conference on Artificial Intelligence: Pulling Together or Pulling Apart?", AI Magazine, 12(1), 1992, pp17-41.
[6] R. G. Donaldson, M. Kamstra, "An artificial neural network-GARCH model for international stock return volatility", Journal of Empirical Finance, 4(1), 1997, pp17-46.
[7] R. Fildes, "Quantitative forecasting -- the state of the art: Causal models", Journal of the Operational Research Society, 36, 1985, pp691-710.
[8] R. Hecht-Nielsen, Neurocomputing, Addison-Wesley, 1990.
[9] R. Heinkel, A. Kraus, "Measuring Event Impacts in Thinly Traded Stocks", Journal of Financial and Quantitative Analysis, March 1988.
[10] K. Hornik, M. Stinchcombe, H. White, "Multilayer feedforward networks are universal approximators", Neural Networks, 2(5), 1989, pp359-366.
[11] T. Kolarik, G. Rudorfer, "Time series forecasting using neural networks", APL Quote Quad, 25(1), 1994, pp86-94.
[12] M. C. Mozer, P. Smolensky, "Using relevance to reduce network size automatically", Connection Science, 1(1), 1989, pp3-16.
[13] H.-L. Poh, J. T. Yao, T. Jasic, "Neural Networks for the Analysis and Forecasting of Advertising and Promotion Impact", International Journal of Intelligent Systems in Accounting, Finance and Management, 7(4), 1998, pp253-268.
[14] L. Prechelt, "A Quantitative Study of Experimental Evaluations of Neural Network Learning Algorithms: Current Research Practice", Neural Networks, 9(3), 1996, pp457-462.
[15] A. N. Refenes, M. Azema-Barac, L. Chen, S. A. Karoussos, "Currency Exchange Rate Prediction and Neural Network Design Strategies", Neural Computing & Applications, 1, 1993, pp46-58.
[16] W. Remus, M. O'Connor, "Neural Networks For Time Series Forecasting", in Principles of Forecasting: A Handbook for Researchers and Practitioners, J. Scott Armstrong (ed.), Norwell, MA: Kluwer Academic Publishers, 2001.
[17] D. E. Rumelhart, G. E. Hinton, R. J. Williams, "Learning Internal Representations by Error Propagation", in Parallel Distributed Processing, Volume 1, D. E. Rumelhart, J. L. McClelland (eds.), MIT Press, Cambridge, MA, 1986, pp318-362.
[18] R. Setiono, W. K. Leow, J. Y.-L. Thong, "Opening the neural network black box: An algorithm for extracting rules from function approximating neural networks", Proceedings of the International Conference on Information Systems 2000, Brisbane, Australia, December 10-13, 2000, pp176-186.
[19] R. S. Sexton, R. E. Dorsey, J. D. Johnson, "Toward global optimization of neural networks: A comparison of the genetic algorithm and backpropagation", Decision Support Systems, 22(2), 1998, pp171-185.
[20] W. F. Tichy, P. Lukowicz, L. Prechelt, A. Heinz, "Experimental Evaluation in Computer Science: A Quantitative Study", Journal of Systems and Software, 28(1), 1995, pp9-18.
[21] W. F. Tichy, "Should Computer Scientists Experiment More? 16 Reasons to Avoid Experimentation", IEEE Computer, 31(5), 1998, pp32-44.
[22] J. T. Yao, C. L. Tan, H.-L. Poh, "Neural Networks for Technical Analysis: A Study on KLCI", International Journal of Theoretical and Applied Finance, 2(2), 1999, pp221-241.
[23] J. T. Yao, C. L. Tan, "Time dependent Directional Profit Model for Financial Time Series Forecasting", Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, Como, Italy, July 24-27, 2000, Volume V, pp291-296.
[24] J. T. Yao, C. L. Tan, "A case study on using neural networks to perform technical forecasting of forex", Neurocomputing, 34(1-4), 2000, pp79-98.
[25] J. T. Yao, C. L. Tan, Y. L. Li, "Option Prices Forecasting Using Neural Networks", Omega: The International Journal of Management Science, 28(4), 2000, pp455-466.
[26] S. Wang, "The Unpredictability of Standard Back Propagation Neural Networks in Classification Applications", Management Science, 41(3), March 1995, pp555-559.
[27] D. H. Wolpert, W. G. Macready, "No Free Lunch Theorems for Search", Technical Report SFI-TR-95-02-010, The Santa Fe Institute, 1996.
[28] B. Zhou, "Estimating the Variance Parameter From Noisy High Frequency Financial Data", MIT Sloan School Working Paper No. 3739, 1995.