A Hands-On Time Series Forecasting Example with LSTM (Python, Deep Learning): Runnable Code and Walkthrough
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
from matplotlib import pyplot as plt
from sklearn.preprocessing import MinMaxScaler
# Toy dataset: a sine wave sampled at 100 evenly spaced points
data = np.linspace(0, 100, 100)
data = np.sin(data)
# Split the data into training (70%) and test (30%) sets
train_size = int(len(data) * 0.7)
test_size = len(data) - train_size
train, test = data[0:train_size], data[train_size:]
# Scale the data to [0, 1]; fit the scaler on the training set only
scaler = MinMaxScaler(feature_range=(0, 1))
train_scaled = scaler.fit_transform(train.reshape(-1, 1))
test_scaled = scaler.transform(test.reshape(-1, 1))
# Build the LSTM model: one LSTM layer with 50 units and a single output neuron
model = Sequential()
model.add(LSTM(50, input_shape=(None, 1)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
# Train the model to predict the next scaled value from the current one.
# Inputs must be 3D (samples, timesteps, features), and the targets must be
# scaled too, since predictions are inverse-transformed later.
X_train = train_scaled[:-1].reshape(-1, 1, 1)
y_train = train_scaled[1:]
history = model.fit(X_train, y_train, epochs=100, batch_size=1, verbose=2, validation_split=0.05)
# Predict one step ahead on both sets
train_predict = model.predict(train_scaled[:-1].reshape(-1, 1, 1))
test_predict = model.predict(test_scaled[:-1].reshape(-1, 1, 1))
# Undo the scaling so predictions are comparable to the raw data
train_predict = scaler.inverse_transform(train_predict)
test_predict = scaler.inverse_transform(test_predict)
# Plot the real series against both sets of one-step-ahead predictions,
# shifting each prediction to the time step it refers to
plt.plot(data, color='blue', label='Real')
plt.plot(np.arange(1, 1 + len(train_predict)), train_predict, color='red', label='Train Predict')
plt.plot(np.arange(train_size + 1, train_size + 1 + len(test_predict)), test_predict, color='green', label='Test Predict')
plt.legend()
plt.show()
This code shows how to build and train an LSTM model with Keras in Python, generate predictions, and visualize them. It covers scaling the data with MinMaxScaler, training the network on one-step-ahead targets, and plotting the predictions against the real series with matplotlib.
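One detail worth emphasizing from the scaling step: the scaler is fitted on the training set only, and the same statistics are then reused for the test set, so no information from the test data leaks into preprocessing. A small self-contained sketch (toy numbers, not the sine data above) shows the effect:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

train = np.array([[0.0], [5.0], [10.0]])
test = np.array([[2.0], [12.0]])

scaler = MinMaxScaler(feature_range=(0, 1))
train_scaled = scaler.fit_transform(train)  # learns min=0, max=10 from train only
test_scaled = scaler.transform(test)        # reuses the training statistics

print(train_scaled.ravel())  # [0.  0.5 1. ]
print(test_scaled.ravel())   # [0.2 1.2] -- values outside [0, 1] are possible
roundtrip = scaler.inverse_transform(test_scaled)
print(roundtrip.ravel())     # [ 2. 12.]
```

Note that test values outside the training range map outside [0, 1]; that is expected and is why the scaler must not be refitted on the test set.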
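The code above uses a single past value as input for each prediction. A common generalization is a sliding window of `look_back` past values per sample; the helper below (a hypothetical name, not part of the original post) builds such windows in the (samples, timesteps, features) shape that Keras LSTM layers expect:

```python
import numpy as np

def create_dataset(series, look_back=3):
    """Build (samples, look_back, 1) windows and next-step targets.

    Hypothetical helper: each input is `look_back` consecutive values,
    and the target is the value that immediately follows the window.
    """
    X, y = [], []
    for i in range(len(series) - look_back):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back])
    X = np.array(X).reshape(-1, look_back, 1)
    y = np.array(y).reshape(-1, 1)
    return X, y

series = np.sin(np.linspace(0, 100, 100))
X, y = create_dataset(series, look_back=3)
print(X.shape, y.shape)  # (97, 3, 1) (97, 1)
```

With this helper, the tutorial's fit call would become `model.fit(X, y, ...)` and `input_shape=(look_back, 1)`, letting the network condition each prediction on several past values instead of one.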
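Finally, the model above only predicts one step ahead. To forecast further, a common approach is recursive forecasting: feed each prediction back into the input window. The sketch below demonstrates the loop with a stand-in `predict_fn` in place of `model.predict` (a toy placeholder, so the example runs without Keras):

```python
import numpy as np

def forecast(predict_fn, history, steps, look_back):
    """Recursive multi-step forecast: feed each prediction back as input.

    `predict_fn` stands in for `model.predict` on a (1, look_back, 1)
    window; here it is a hypothetical placeholder, not Keras itself.
    """
    window = list(history[-look_back:])
    out = []
    for _ in range(steps):
        x = np.array(window).reshape(1, look_back, 1)
        yhat = float(predict_fn(x))
        out.append(yhat)
        window = window[1:] + [yhat]  # slide the window forward by one step
    return out

# Toy predictor ("next value" = mean of the window) just to exercise the loop;
# the first forecast is the mean of [1, 2, 3] = 2.0
preds = forecast(lambda x: x.mean(), [1.0, 2.0, 3.0], steps=2, look_back=3)
print(preds)
```

In practice `predict_fn` would be the trained model (with inverse scaling applied afterwards); note that errors compound across steps, so recursive forecasts degrade as the horizon grows.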