Spark - Non-time-based windows are not supported on streaming DataFrames/Datasets



I need to write a Spark SQL query with an inner select and PARTITION BY, but I am getting an AnalysisException.
I have already spent a few hours on this, and other approaches have not worked either.



Exception:



Exception in thread "main" org.apache.spark.sql.AnalysisException: Non-time-based windows are not supported on streaming DataFrames/Datasets;;
Window [sum(cast(_w0#41 as bigint)) windowspecdefinition(deviceId#28, timestamp#30 ASC NULLS FIRST, RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS grp#34L], [deviceId#28], [timestamp#30 ASC NULLS FIRST]
+- Project [currentTemperature#27, deviceId#28, status#29, timestamp#30, wantedTemperature#31, CASE WHEN (status#29 = cast(false as boolean)) THEN 1 ELSE 0 END AS _w0#41]


I assume this query is too complicated to implement this way, but I don't know how to fix it.



import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkUtils.getSparkSession("RawModel");

Dataset<RawModel> datasetMap = readFromKafka(spark);

// Expose the streaming Dataset to SQL (registerTempTable is deprecated;
// createOrReplaceTempView is the current equivalent).
datasetMap.createOrReplaceTempView("test");

// A new group (grp) starts every time status is 'false'; then temperature
// and time range are aggregated per device and group.
Dataset<Row> res = spark.sql(
    " select deviceId, grp, avg(currentTemperature) as averageT, " +
    "        min(timestamp) as minTime, max(timestamp) as maxTime, count(*) as countFrame " +
    " from (select test.*, " +
    "              sum(case when status = 'false' then 1 else 0 end) " +
    "                over (partition by deviceId order by timestamp) as grp " +
    "       from test) test " +
    " group by deviceId, grp ");


Any suggestion would be greatly appreciated. Thank you.

java apache-spark apache-spark-sql spark-streaming

asked Nov 14 '18 at 7:09, edited Nov 14 '18 at 8:48 – Raskolnikov
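As the exception message indicates, Structured Streaming rejects row-frame window functions such as the running sum(...) over (partition by ... order by timestamp) above: the RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW frame would require unbounded per-device history. Streaming aggregations only support event-time windows built with the window() function. Below is a minimal sketch of that supported pattern; it assumes timestamp is a TimestampType column, and the 10-minute window and watermark are illustrative values, not taken from the question.

import static org.apache.spark.sql.functions.avg;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.count;
import static org.apache.spark.sql.functions.max;
import static org.apache.spark.sql.functions.min;
import static org.apache.spark.sql.functions.window;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// Group by device and a fixed 10-minute event-time window instead of a
// row-based window frame; this form is allowed on streaming Datasets.
Dataset<Row> perWindow = datasetMap
        .withWatermark("timestamp", "10 minutes")       // bounds the state kept for late data
        .groupBy(col("deviceId"),
                 window(col("timestamp"), "10 minutes"))
        .agg(avg("currentTemperature").alias("averageT"),
             min("timestamp").alias("minTime"),
             max("timestamp").alias("maxTime"),
             count("*").alias("countFrame"));

Note that this produces fixed time buckets rather than the status-based sessions the original grp column expresses; reproducing that grouping on a stream needs stateful processing (see the custom-aggregation sketch after the comments below).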
Comments:

  • I am also getting same error, did you get any solution. – Nirmal_stack, Dec 19 '18 at 14:33

  • I did not. I implemented from beginning with different approach. I used custom aggregation. – Raskolnikov, Dec 19 '18 at 15:17

  • You mean, pandas_udf? – Nirmal_stack, Dec 19 '18 at 15:35
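The "custom aggregation" mentioned in the comments is not shown in the thread. For illustration only, here is a minimal sketch of what a typed custom Aggregator looks like in the Java API, computing the average temperature per device; it assumes RawModel exposes getDeviceId() and getCurrentTemperature() getters (names assumed, not confirmed by the question). The status-based session grouping itself is not solved by this alone and would typically need flatMapGroupsWithState.

import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoder;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.TypedColumn;
import org.apache.spark.sql.expressions.Aggregator;
import scala.Tuple2;

// Typed Aggregator: input RawModel, buffer (sum, count), output average.
public class AverageTemperature extends Aggregator<RawModel, Tuple2<Double, Long>, Double> {
    @Override
    public Tuple2<Double, Long> zero() {
        return new Tuple2<>(0.0, 0L);
    }

    @Override
    public Tuple2<Double, Long> reduce(Tuple2<Double, Long> buf, RawModel row) {
        // Fold one record into the running (sum, count) buffer.
        return new Tuple2<>(buf._1() + row.getCurrentTemperature(), buf._2() + 1);
    }

    @Override
    public Tuple2<Double, Long> merge(Tuple2<Double, Long> a, Tuple2<Double, Long> b) {
        // Combine partial buffers from different partitions.
        return new Tuple2<>(a._1() + b._1(), a._2() + b._2());
    }

    @Override
    public Double finish(Tuple2<Double, Long> buf) {
        return buf._2() == 0L ? 0.0 : buf._1() / buf._2();
    }

    @Override
    public Encoder<Tuple2<Double, Long>> bufferEncoder() {
        return Encoders.tuple(Encoders.DOUBLE(), Encoders.LONG());
    }

    @Override
    public Encoder<Double> outputEncoder() {
        return Encoders.DOUBLE();
    }
}

Used with groupByKey, the aggregation then stays within what streaming supports:

TypedColumn<RawModel, Double> averageT = new AverageTemperature().toColumn().name("averageT");

Dataset<Tuple2<String, Double>> avgPerDevice = datasetMap
        .groupByKey((MapFunction<RawModel, String>) RawModel::getDeviceId, Encoders.STRING())
        .agg(averageT);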