Matlab: neural network time-series prediction?

To make sure this question does not remain open when the answer is already present, I will post the comment that appears to address the problem:

Credit to @DanielTheRocketMan

I think you should work in steps:

  1. see whether the data are stationary
  2. if not, deal with that (for example, by differencing the data; see the sketch after this list)
  3. try the simplest possible model, for example an AR model
  4. try a nonlinear model, for example NAR
  5. move on to an NN model.
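
To make steps 1 and 2 concrete, here is a minimal sketch. It assumes the Econometrics Toolbox is available for adftest and that close_data holds the raw series; the variable names are illustrative, not from the original post:

% Step 1: augmented Dickey-Fuller test (Econometrics Toolbox).
% h == 0 means a unit root cannot be rejected, i.e. the series looks non-stationary.
[h,pValue] = adftest(close_data);

% Step 2: if non-stationary, difference the series once and test again.
if h == 0
    close_diff = diff(close_data);
    [h2,pValue2] = adftest(close_diff);
end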

Background: I am trying to use MATLAB's Neural Network Toolbox to predict future values of data. I run it from the GUI, but I have also included the generated output code below.

Problem: my predicted values lag the actual values by 2 time periods, and I don't know how to actually get a "t+1" (forecast) value.

Code:

% Solve an Autoregression Time-Series Problem with a NAR Neural Network
% Script generated by NTSTOOL
% Created Tue Mar 05 22:09:39 EST 2013
%
% This script assumes this variable is defined:
%
%   close_data - feedback time series.

targetSeries = tonndata(close_data_short,false,false);

% Create a Nonlinear Autoregressive Network
feedbackDelays = 1:3;
hiddenLayerSize = 10;
net = narnet(feedbackDelays,hiddenLayerSize);

% Choose Feedback Pre/Post-Processing Functions
% Settings for feedback input are automatically applied to feedback output
% For a list of all processing functions type: help nnprocess
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};

% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged, while
% easily customizing it for networks with differing numbers of delays, with
% open loop or closed loop feedback modes.
[inputs,inputStates,layerStates,targets] = preparets(net,{},{},targetSeries);

% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'dividerand';  % Divide data randomly
net.divideMode = 'time';       % Divide up every value
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% Choose a Training Function
% For a list of all training functions type: help nntrain
net.trainFcn = 'trainlm';  % Levenberg-Marquardt

% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse';  % Mean squared error

% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
    'ploterrcorr','plotinerrcorr'};

% Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);

% Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)

% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)

% View the Network
view(net)

% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotresponse(targets,outputs)
%figure, ploterrcorr(errors)
%figure, plotinerrcorr(inputs,errors)

% Closed Loop Network
% Use this network to do multi-step prediction.
% The function CLOSELOOP replaces the feedback input with a direct
% connection from the output layer.
netc = closeloop(net);
[xc,xic,aic,tc] = preparets(netc,{},{},targetSeries);
yc = netc(xc,xic,aic);
perfc = perform(net,tc,yc)

% Early Prediction Network
% For some applications it helps to get the prediction a timestep early.
% The original network returns predicted y(t+1) at the same time it is given y(t+1).
% For some applications such as decision making, it would help to have predicted
% y(t+1) once y(t) is available, but before the actual y(t+1) occurs.
% The network can be made to return its output a timestep early by removing one delay
% so that its minimal tap delay is now 0 instead of 1. The new network returns the
% same outputs as the original network, but outputs are shifted left one timestep.
nets = removedelay(net);
[xs,xis,ais,ts] = preparets(nets,{},{},targetSeries);
ys = nets(xs,xis,ais);
earlyPredictPerformance = perform(net,ts,ys)

Proposed solution: I believe the answer lies in the last part of the code, "Early Prediction Network". I'm just not sure how to remove "one delay".
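
For reference, a minimal sketch of what removing that delay looks like in practice, assuming net and targetSeries come from the script above (it uses only removedelay and preparets, which the generated script already calls; reading the last output as the t+1 forecast is my interpretation of the toolbox documentation):

% removedelay(net) shifts the tap delays from 1:3 to 0:2, so the network now
% produces its prediction one step early: fed data up to y(t), its final
% output is an estimate of y(t+1).
nets = removedelay(net);
[xs,xis,ais,ts] = preparets(nets,{},{},targetSeries);
ys = nets(xs,xis,ais);
nextValue = ys{end}   % forecast for the next, not-yet-observed period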

Additional question: is there some function that can be obtained from this so that I can use it over and over again? Or would I have to keep retraining once I have the next period of data?
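
On the additional question, a hedged sketch: the trained network object is itself reusable, so newly observed values can be appended to the series and the network simulated again without retraining (new_close_data below is hypothetical: the original series with the new observations appended, in the same format as close_data_short). Newer Neural Network Toolbox releases also provide genFunction to export a trained network as a stand-alone M-function. Whether to retrain as new data arrive is a modelling choice; it is not required just to produce the next forecast.

% Reuse the already-trained early-prediction network on an updated series.
newSeries = tonndata(new_close_data,false,false);   % new_close_data is hypothetical
[xs,xis,ais,ts] = preparets(nets,{},{},newSeries);
ys = nets(xs,xis,ais);
nextValue = ys{end}   % forecast for the period after the latest observation

% In newer releases, the network can also be exported once and called repeatedly:
% genFunction(net,'myNARnet');   % creates myNARnet.m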


Try a simpler version. I have tested this code and it works fine for me.

inputs = X;        % define input and target
targets = y;
hiddenLayerSize = 10;
net = patternnet(hiddenLayerSize);

% Set up Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

[net,tr] = train(net,inputs,targets);
outputs = net(inputs);
errors = gsubtract(targets,outputs);
mse(errors)