Transparent cache when querying time series with Apache Spark










We have time series data stored as daily Parquet files of about 3 GB each in HDFS (hdfs:///data/year=X/month=X/day=X/data.parquet.gz), exposed through Hive as a data table.



Every night we run SQL queries with Apache Spark to generate reports:



(1) SELECT date, count(*) FROM data GROUP BY date

(of course, our real queries are more complex)



I notice that Apache Spark runs the query over the whole data set (which is expected), but I would like to reuse the results already computed for previous days where possible, since historical data never changes.



Solution in place



I can achieve this by doing an incremental consolidation:



(2) INSERT INTO consolidation SELECT date, count(*) FROM data WHERE date="yesterday"



and then run the reporting query against it: (3) SELECT date, value FROM consolidation
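
To make the current setup concrete, here is a minimal sketch of that nightly job (simplified; the table names are the ones above, and the date argument is just a placeholder supplied by our scheduler):

```scala
import org.apache.spark.sql.SparkSession

// Simplified sketch of the nightly consolidation job described above.
// `data` and `consolidation` are the Hive tables from the question;
// the date argument (e.g. "2018-11-09") is passed in by the scheduler.
object NightlyConsolidation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("nightly-consolidation")
      .enableHiveSupport() // so the Hive-managed tables are visible
      .getOrCreate()

    val yesterday = args(0)

    // (2) aggregate only the newest day and append it to the pre-aggregated table
    spark.sql(
      s"INSERT INTO consolidation SELECT date, count(*) AS value FROM data WHERE date = '$yesterday'")

    // (3) the report then reads the small consolidated table instead of all raw files
    spark.sql("SELECT date, value FROM consolidation ORDER BY date").show()

    spark.stop()
  }
}
```

This works, but every report has to be rewritten by hand against its own consolidation table, which is what I would like to avoid.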



The transparent cache I want



I am wondering whether it is possible to get this behaviour with query (1) as it is, perhaps by hooking into how Spark builds the logical plan, but I don't know where to start.
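
To illustrate what I mean by hooking into the plan, here is a rough, untested skeleton based on Spark's SparkSessionExtensions API (available since Spark 2.2). The class and rule names are made up, and the rule is deliberately a no-op: recognising the aggregate and rewriting it to use the consolidation table is exactly the part I don't know how to do.

```scala
import org.apache.spark.sql.{SparkSession, SparkSessionExtensions}
import org.apache.spark.sql.catalyst.plans.logical.{Aggregate, LogicalPlan}
import org.apache.spark.sql.catalyst.rules.Rule

// Hypothetical optimizer rule: spot an aggregate over the raw `data` table and
// (eventually) rewrite it to read the pre-aggregated `consolidation` table for
// past days plus the raw files for the newest day. Currently a no-op skeleton.
case class ReusePrecomputedAggregates(spark: SparkSession) extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan transform {
    case agg: Aggregate =>
      // TODO: check that the plan really scans `data`, that the aggregate shape
      // matches what `consolidation` stores, and that old partitions are unchanged.
      agg
  }
}

// Would be enabled with --conf spark.sql.extensions=TransparentCacheExtensions
// or SparkSession.builder().withExtensions(new TransparentCacheExtensions()).
class TransparentCacheExtensions extends (SparkSessionExtensions => Unit) {
  override def apply(extensions: SparkSessionExtensions): Unit = {
    extensions.injectOptimizerRule(session => ReusePrecomputedAggregates(session))
  }
}
```

Is something like this feasible, or is there an existing mechanism that already does this kind of rewrite?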










  • What you want is a materialized view, which requires (a) massive R&D effort and (b) a way to detect which segments of data have changed since the last materialization. Since Spark is not a DBMS and cannot ensure that the data files have not changed in the meantime, it can't do it. For the record, Hortonworks (RIP) tried to introduce that feature in Hive 3, but only for "ACID tables" that are under the full control of Hive. Also for the record, materialized views in Oracle have been around for a decade and are still a bit tricky to work with...
    – Samson Scharfrichter
    Nov 10 '18 at 10:40











  • Some questions: 1. Is each day's data appended to the previous data, or is each day written to separate files? 2. Can you change the writing application? 3. Do you need data for just two days together, or for any range? 4. Can you use the DataFrame API?
    – ookboy24
    Nov 10 '18 at 16:41










  • If you have count distinct over periods greater than one day, reusing pre-aggregated results from previous days would be problematic.
    – alexeipab
    Nov 10 '18 at 18:43










  • Data IS appended in a new folder and the previous data never changes; that's why I think it's possible to achieve this "transparently".
    – Thomas Decaux
    Nov 12 '18 at 13:43















apache-spark apache-spark-sql






asked Nov 10 '18 at 9:18 by Thomas Decaux
edited Nov 10 '18 at 13:12 by cricket_007






