How to get data into the proper shape to feed to an LSTM layer in Keras for sequence-to-sequence prediction
I have a dataframe for a time series, shown below, where SETTLEMENTDATE is the index. I want to take the first row, i.e. 2018-11-01 14:30:00, use the values of T_1, T_2, T_3, T_4, T_5, T_6 as the input sequence, and predict the sequence DE_1, DE_2, DE_3, DE_4.

I am using Keras for sequence-to-sequence time series prediction with an LSTM. I took T_1 to T_6 as the input dataframe 'X' and DE_1 to DE_4 as the output dataframe 'y', converted them with X = np.array(X) and y = np.array(y), and then reshaped them with X = X.reshape(4, 6, 1) and y = y.reshape(4, 4, 1) to feed to batch_input_shape(), but it does not work (a minimal sketch of this attempt follows the table below).

How do I get the data into the proper shape to feed to the LSTM layer?
T_1 T_2 T_3 T_4 T_5 T_6 DE_1 DE_2 DE_3 DE_4
SETTLEMENTDATE
2018-11-01 14:30:00 1645.82 1623.23 1619.09 1581.94 1538.20 1543.48 1624.23 1722.85 1773.77 1807.04
2018-11-01 15:00:00 1628.60 1645.82 1623.23 1619.09 1581.94 1538.20 1722.85 1773.77 1807.04 1873.53
2018-11-01 15:30:00 1624.23 1628.60 1645.82 1623.23 1619.09 1581.94 1773.77 1807.04 1873.53 1889.06
2018-11-01 16:00:00 1722.85 1624.23 1628.60 1645.82 1623.23 1619.09 1807.04 1873.53 1889.06 1924.57
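For reference, a minimal reconstruction of the reshaping attempt described above, built from the four rows in the table (the exact original code is not shown in the question, so the dataframe construction here is an assumption):

import numpy as np
import pandas as pd

# The four rows from the table above, without the SETTLEMENTDATE index.
data = {
    'T_1': [1645.82, 1628.60, 1624.23, 1722.85],
    'T_2': [1623.23, 1645.82, 1628.60, 1624.23],
    'T_3': [1619.09, 1623.23, 1645.82, 1628.60],
    'T_4': [1581.94, 1619.09, 1623.23, 1645.82],
    'T_5': [1538.20, 1581.94, 1619.09, 1623.23],
    'T_6': [1543.48, 1538.20, 1581.94, 1619.09],
    'DE_1': [1624.23, 1722.85, 1773.77, 1807.04],
    'DE_2': [1722.85, 1773.77, 1807.04, 1873.53],
    'DE_3': [1773.77, 1807.04, 1873.53, 1889.06],
    'DE_4': [1807.04, 1873.53, 1889.06, 1924.57],
}
df = pd.DataFrame(data)

# Split into input and output frames, convert to arrays, then reshape.
X = np.array(df[['T_1', 'T_2', 'T_3', 'T_4', 'T_5', 'T_6']])
y = np.array(df[['DE_1', 'DE_2', 'DE_3', 'DE_4']])

X = X.reshape(4, 6, 1)   # (samples, timesteps, features)
y = y.reshape(4, 4, 1)   # this target shape is what the model complains about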
python-3.x keras time-series lstm recurrent-neural-network
asked Nov 9 at 6:41 by Nikhil Mangire · edited Nov 9 at 9:52 by Novak
You have to show us how you have set up the LSTM layer, because the expected input shape depends on whether you have set return_state or return_sequences to True. – Novak Nov 9 at 9:34

Hi @Novak, I have given return_sequences=True. – Nikhil Mangire Nov 9 at 14:33
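To illustrate the point made in this comment, here is a small standalone sketch (an editorial addition, not from the thread) showing how return_sequences changes the shape the model produces, and therefore the target shape it expects:

from keras.models import Sequential
from keras.layers import LSTM

# With return_sequences=True the LSTM emits one output per timestep.
seq = Sequential([LSTM(8, input_shape=(6, 1), return_sequences=True)])
print(seq.output_shape)   # (None, 6, 8)  -> targets must be 3-D

# With return_sequences=False it emits only the last output.
last = Sequential([LSTM(8, input_shape=(6, 1), return_sequences=False)])
print(last.output_shape)  # (None, 8)     -> targets must be 2-D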
1 Answer
LSTM accepts either input_shape or batch_input_shape. The difference is one of convention: input_shape does not contain the batch size, while batch_input_shape is the full input shape including the batch size. An LSTM layer is a recurrent layer, so it expects a 3-dimensional input of shape (batch_size, timesteps, input_dim). That is why the correct specification is input_shape=(6, 1) or batch_input_shape=(BATCH_SIZE, 6, 1), where BATCH_SIZE is the size of your batch.

I hope it helps :)
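To make this concrete, here is a minimal runnable sketch (an editorial addition, not part of the original answer); the LSTM(32) width and the Dense(4) head mapping to the four DE_* columns are assumptions, since the asker's model is not shown:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Toy data with the shapes from the question: 4 samples,
# 6 input timesteps (T_1..T_6), 4 targets per sample (DE_1..DE_4).
X = np.random.rand(4, 6)
y = np.random.rand(4, 4)

# The LSTM expects (batch_size, timesteps, input_dim), so add a feature axis.
X = X.reshape((4, 6, 1))

model = Sequential()
model.add(LSTM(32, input_shape=(6, 1)))  # batch size is omitted from input_shape
model.add(Dense(4))                      # one output per DE_* column
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=10, batch_size=2, verbose=0)

Note that with this setup y stays 2-dimensional, (samples, 4); the y.reshape(4, 4, 1) from the question is only needed if the final layer itself returns sequences.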
answered Nov 9 at 15:57 by Novak