
I have a Spark time series DataFrame that I would like to split 80/20 into train and test sets. Since this is time series data, I don't want to do a random split. How do I do this so that the first DataFrame goes to train and the second to test?

  • I would really appreciate the reason for the negative score on the question. Such feedback will greatly help me and others. – Rohit Aug 09 '18 at 18:40
  • I am not the downvoter, but this question is somewhat broad. It would have been helpful if you provided a [mcve] and showed exactly what your desired output was. Here's a post that shows [how to create good reproducible apache spark dataframe examples](https://stackoverflow.com/questions/48427185/how-to-make-good-reproducible-apache-spark-dataframe-examples). – pault Aug 09 '18 at 18:53

1 Answer


You can use `pyspark.sql.functions.percent_rank()` to get the percentile ranking of your DataFrame ordered by the timestamp/date column. Then take all rows with a rank <= 0.8 as your training set and the rest as your test set.

For example, if you had the following DataFrame:

df.show(truncate=False)
#+---------------------+---+
#|date                 |x  |
#+---------------------+---+
#|2018-01-01 00:00:00.0|0  |
#|2018-01-02 00:00:00.0|1  |
#|2018-01-03 00:00:00.0|2  |
#|2018-01-04 00:00:00.0|3  |
#|2018-01-05 00:00:00.0|4  |
#+---------------------+---+
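
(If you want to reproduce this example, one way to build a similar DataFrame is sketched below; the schema and setup are assumptions, since the original post doesn't show any setup code.)

from pyspark.sql import SparkSession
from pyspark.sql.functions import to_timestamp

spark = SparkSession.builder.getOrCreate()

# Hypothetical setup: build the small example DataFrame shown above.
df = spark.createDataFrame(
    [("2018-01-0{} 00:00:00".format(i + 1), i) for i in range(5)],
    ["date", "x"],
).withColumn("date", to_timestamp("date"))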

You'd want the first 4 rows in your training set and the last one in your test set. First add a column `rank`:

from pyspark.sql.functions import percent_rank
from pyspark.sql import Window

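# No partitioning column here: all rows are moved to a single partition so
# they can be globally ordered by date (hence the WindowExec warning
# mentioned in the comments below).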
df = df.withColumn("rank", percent_rank().over(Window.partitionBy().orderBy("date")))

Now use rank to split your data into train and test:

train_df = df.where("rank <= .8").drop("rank")
train_df.show()
#+---------------------+---+
#|date                 |x  |
#+---------------------+---+
#|2018-01-01 00:00:00.0|0  |
#|2018-01-02 00:00:00.0|1  |
#|2018-01-03 00:00:00.0|2  |
#|2018-01-04 00:00:00.0|3  |
#+---------------------+---+

test_df = df.where("rank > .8").drop("rank")
test_df.show()
#+---------------------+---+
#|date                 |x  |
#+---------------------+---+
#|2018-01-05 00:00:00.0|4  |
#+---------------------+---+
  • @pault Can we use the `row_number` function for this kind of problem? – User12345 Aug 09 '18 at 19:47
  • @User12345 you can, but it would require also knowing the number of rows. – pault Aug 09 '18 at 19:54
  • @pault Wouldn't the .count() tell you the number of rows? – Rohit Aug 09 '18 at 20:54
  • Yes, but you'd have to run the `count()` first which requires going over the whole DataFrame once. Then you'd have to assign a `row_number()` (expensive) before filtering. Using `percent_rank()` should be more efficient. – pault Aug 09 '18 at 21:16
  • @pault is there an approach to performing the split so this warning does not occur? "WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation" – thePurplePython Feb 19 '20 at 16:42
  • Is there a way to achieve this without using window functions and without knowing size of dataframe? In my problem, I don't have any column to partition on and hence using window functions cause serious performance issue. – drp Jun 17 '20 at 21:01
  • @pault did you manage to come across a solution because I'm having the same problem. – Barry Allen May 06 '23 at 23:03
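
To make the `row_number()` alternative from the comments concrete, here is a rough sketch (not from the original answer). As pault notes, it needs a `count()` over the full DataFrame first, which is why `percent_rank()` is usually preferable; `df` is assumed to be the original DataFrame, without the `rank` column:

from pyspark.sql.functions import row_number
from pyspark.sql import Window

n = df.count()  # extra full pass over the data just to get the row count

# Same single-partition window as before, ordered by the time column.
w = Window.partitionBy().orderBy("date")
df_rn = df.withColumn("rn", row_number().over(w))

train_df = df_rn.where("rn <= {}".format(int(0.8 * n))).drop("rn")
test_df = df_rn.where("rn > {}".format(int(0.8 * n))).drop("rn")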
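
As for avoiding the window function (and its single-partition warning) altogether, one possible workaround, not from the original answer, is to compute an approximate 80th-percentile cutoff with `DataFrame.approxQuantile` and filter on it. `approxQuantile` needs a numeric column, so the timestamp is cast to epoch seconds first; the split is only approximately 80/20, with ties going to the training set:

from pyspark.sql.functions import col

# Cast the timestamp to epoch seconds so approxQuantile can handle it.
df_num = df.withColumn("ts", col("date").cast("long"))

# Approximate 80th percentile of the time column; the last argument is the
# allowed relative error (0.0 would be exact, but slower).
cutoff = df_num.approxQuantile("ts", [0.8], 0.001)[0]

train_df = df_num.where(col("ts") <= cutoff).drop("ts")
test_df = df_num.where(col("ts") > cutoff).drop("ts")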