
I am using Spark SQL 2.4.1. I have a scenario like the one below:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.{DoubleType, IntegerType}
import spark.implicits._

val df = Seq(
  (2010,"2018-11-24",71285,"USA","0.9192019",  "0.1992019",  "0.9955999"),
  (2010,"2017-08-24",71286,"USA","0.9292018",  "0.2992019",  "0.99662018"),
  (2010,"2019-02-24",71287,"USA","0.9392017",  "0.3992019",  "0.99772000")).toDF("seq_id","load_date","company_id","country_code","item1_value","item2_value","item3_value")
.withColumn("item1_value", $"item1_value".cast(DoubleType))
.withColumn("item2_value", $"item2_value".cast(DoubleType))
.withColumn("item3_value", $"item3_value".cast(DoubleType))
.withColumn("fiscal_year", year(col("load_date")).cast(IntegerType))
.withColumn("fiscal_quarter", quarter(col("load_date")).cast(IntegerType))

df.show()


val aggregateColumns = Seq("item1_value","item2_value","item3_value")
val aggDFs = aggregateColumns.map( c => {
    df.groupBy("country_code").agg(lit(c).as("col_name"), sum(c).as("sum_of_column"))
})


val combinedDF = aggDFs.reduce(_ union _)
combinedDF.show

The output data I am getting looks like this:

+------------+-----------+------------------+
|country_code|   col_name|     sum_of_column|
+------------+-----------+------------------+
|         USA|item1_value|         2.7876054|
|         USA|item2_value|         0.8976057|
|         USA|item3_value|2.9899400800000002|
+------------+-----------+------------------+

I need to get the other columns in the output as well, i.e. "seq_id", "load_date" and "company_id". How can I get them after the aggregation operation on the DataFrame?
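
For reference, the aggregates can also be joined back onto the original rows to keep those columns. A minimal sketch, assuming the df and aggregateColumns defined above; sums and withDetails are just illustrative names:

import org.apache.spark.sql.functions._

// Aggregate per country as before, one DataFrame per item column, then union them.
val sums = aggregateColumns.map { c =>
  df.groupBy("country_code")
    .agg(lit(c).as("col_name"), sum(c).as("sum_of_column"))
}.reduce(_ union _)

// Join the sums back, so every original row keeps seq_id, load_date and
// company_id alongside each (col_name, sum_of_column) pair for its country.
val withDetails = df
  .select("seq_id", "load_date", "company_id", "country_code")
  .join(sums, Seq("country_code"))

withDetails.show(false)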


2 Answers


You can use window functions to keep the non-aggregated columns, i.e. show the sum on every row of its partition.

Please see the code snippet below:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.{DoubleType, IntegerType}
import spark.implicits._

val df = Seq(
  (2010,"2018-11-24",71285,"USA","0.9192019",  "0.1992019",  "0.9955999"),
  (2010,"2017-08-24",71286,"USA","0.9292018",  "0.2992019",  "0.99662018"),
  (2010,"2019-02-24",71287,"USA","0.9392017",  "0.3992019",  "0.99772000")).
  toDF("seq_id","load_date","company_id","country_code","item1_value","item2_value","item3_value").
  withColumn("item1_value", $"item1_value".cast(DoubleType)).
  withColumn("item2_value", $"item2_value".cast(DoubleType)).
  withColumn("item3_value", $"item3_value".cast(DoubleType)).
  withColumn("fiscal_year", year(col("load_date")).cast(IntegerType)).
  withColumn("fiscal_quarter", quarter(col("load_date")).cast(IntegerType))


val byCountry = Window.partitionBy(col("country_code"))

val aggregateColumns = Seq("item1_value","item2_value","item3_value")
val aggDFs = aggregateColumns.map( c => {
    df.withColumn("col_name", lit(c)).withColumn("sum_country", sum(c) over byCountry)
})

val combinedDF = aggDFs.reduce(_ union _)

combinedDF.
  select("seq_id","load_date","company_id","country_code","col_name","sum_country").
  distinct.show(100, false)

Output would be like this:

+------+----------+----------+------------+-----------+------------------+
|seq_id|load_date |company_id|country_code|col_name   |sum_country       |
+------+----------+----------+------------+-----------+------------------+
|2010  |2019-02-24|71287     |USA         |item1_value|2.7876054         |
|2010  |2018-11-24|71285     |USA         |item1_value|2.7876054         |
|2010  |2017-08-24|71286     |USA         |item1_value|2.7876054         |
|2010  |2018-11-24|71285     |USA         |item2_value|0.8976057000000001|
|2010  |2019-02-24|71287     |USA         |item2_value|0.8976057000000001|
|2010  |2017-08-24|71286     |USA         |item2_value|0.8976057000000001|
|2010  |2019-02-24|71287     |USA         |item3_value|2.9899400800000002|
|2010  |2018-11-24|71285     |USA         |item3_value|2.9899400800000002|
|2010  |2017-08-24|71286     |USA         |item3_value|2.9899400800000002|
+------+----------+----------+------------+-----------+------------------+
– Ramdev Sharma
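
As a variant of the same window-function idea (not taken from the answer above), all of the sums can be computed in a single pass and then unpivoted with stack(), which avoids unioning three copies of the DataFrame. A sketch assuming the df, aggregateColumns and byCountry defined above; summed and stackExpr are illustrative names:

import org.apache.spark.sql.functions._

// Add one per-country sum column per item column, all over the same window.
val summed = aggregateColumns.foldLeft(df) { (acc, c) =>
  acc.withColumn(s"sum_$c", sum(col(c)).over(byCountry))
}

// Build a stack() expression that unpivots the sum_* columns into
// (col_name, sum_of_column) rows, e.g.
// stack(3, 'item1_value', sum_item1_value, ...) as (col_name, sum_of_column)
val stackExpr = aggregateColumns
  .map(c => s"'$c', sum_$c")
  .mkString(s"stack(${aggregateColumns.size}, ", ", ", ") as (col_name, sum_of_column)")

summed.selectExpr("seq_id", "load_date", "company_id", "country_code", stackExpr)
  .show(false)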

Replace your code with the code snippet below:

scala> import org.apache.spark.sql.expressions.Window

scala> val W = Window.partitionBy("country_code")
scala> val aggDFs = aggregateColumns.map( c => {
     | df.withColumn("col_name", lit(c)).withColumn("sum_of_column" ,sum(c).over(W)).select("seq_id","load_date", "company_id","col_name","sum_of_column")
     | })

scala> val combinedDF = aggDFs.reduce(_ union _)
scala> combinedDF.show()
+------+----------+----------+-----------+------------------+                   
|seq_id| load_date|company_id|   col_name|     sum_of_column|
+------+----------+----------+-----------+------------------+
|  2010|2018-11-24|     71285|item1_value|         2.7876054|
|  2010|2017-08-24|     71286|item1_value|         2.7876054|
|  2010|2019-02-24|     71287|item1_value|         2.7876054|
|  2010|2018-11-24|     71285|item2_value|         0.8976057|
|  2010|2017-08-24|     71286|item2_value|         0.8976057|
|  2010|2019-02-24|     71287|item2_value|         0.8976057|
|  2010|2018-11-24|     71285|item3_value|2.9899400800000002|
|  2010|2017-08-24|     71286|item3_value|2.9899400800000002|
|  2010|2019-02-24|     71287|item3_value|2.9899400800000002|
+------+----------+----------+-----------+------------------+
– Nikhil Suthar