I have two PySpark DataFrames, df1 and df2, containing user information over the 12 months of 2023. df1 contains the user ID and salary for each month, while df2 contains the user ID and composite score for each month.
I’m trying to identify, for each user, runs of three consecutive months in which both income and composite score were decreasing, i.e. cases where both values decreased over the same three-month window.
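To make the condition concrete, here is a plain-Python sketch (outside Spark) of what I mean by a three-month decreasing run; the month numbers and values below are made up for illustration:

```python
# Plain-Python illustration of the condition: find months that end a run of
# three consecutive months in which a value strictly decreased.
def decreasing_run_ends(values_by_month, run_length=3):
    """Return months m such that the values at m-2, m-1, m are strictly decreasing.

    values_by_month: dict mapping month (1..12) to a value.
    """
    ends = []
    for m in sorted(values_by_month):
        # values at months m-2, m-1, m (oldest first)
        window = [values_by_month.get(m - i) for i in range(run_length - 1, -1, -1)]
        if None in window:
            continue  # not enough history for this month
        if all(a > b for a, b in zip(window, window[1:])):
            ends.append(m)
    return ends

# Made-up example for one user: both series decrease from month 3 through 6
income = {1: 100, 2: 110, 3: 120, 4: 115, 5: 110, 6: 100, 7: 105}
score = {1: 50, 2: 55, 3: 60, 4: 58, 5: 54, 6: 50, 7: 52}

# months where BOTH series end a three-month decreasing run
both = set(decreasing_run_ends(income)) & set(decreasing_run_ends(score))
print(sorted(both))  # -> [5, 6]
```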
I’ve already attempted to calculate the month-over-month differences in income and composite score, flag consecutive decreases, and filter the DataFrames accordingly. However, I’m having trouble getting the correct results.
Here’s the approach I’ve tried so far:
from pyspark.sql import SparkSession
from pyspark.sql.window import Window
from pyspark.sql.functions import lag, col, when, sum as spark_sum
# Initialize Spark session
spark = (SparkSession.builder
    .appName("Consecutive Decrease Analysis")
    .getOrCreate())
# Assuming your DataFrames are named 'df1' and 'df2'
# Define the window specification for both DataFrames
windowSpec1 = Window.partitionBy("user_id").orderBy("month")
windowSpec2 = Window.partitionBy("user_id").orderBy("month")
# Calculate the differences between consecutive months' income and composite scores for both DataFrames
df1 = df1.withColumn("prev_income", lag(col("salary")).over(windowSpec1))
df1 = df1.withColumn("income_diff", col("salary") - col("prev_income"))
df2 = df2.withColumn("prev_score", lag(col("comp_score")).over(windowSpec2))
df2 = df2.withColumn("score_diff", col("comp_score") - col("prev_score"))
# Define the number of consecutive months for the analysis
num_consecutive_months = 3
# Calculate flags indicating if income and score decreased or not for both DataFrames
df1 = df1.withColumn("income_decrease_flag", when(col("income_diff") < 0, 1).otherwise(0))
df2 = df2.withColumn("score_decrease_flag", when(col("score_diff") < 0, 1).otherwise(0))
# Use window functions to count consecutive decreases for both income and score.
# Three consecutive decreasing months correspond to (num_consecutive_months - 1)
# consecutive negative diffs, so sum the flags over the last
# (num_consecutive_months - 1) rows only
df1 = df1.withColumn("consecutive_income_decreases",
    spark_sum(col("income_decrease_flag")).over(
        windowSpec1.rowsBetween(-(num_consecutive_months - 2), Window.currentRow)))
df2 = df2.withColumn("consecutive_score_decreases",
    spark_sum(col("score_decrease_flag")).over(
        windowSpec2.rowsBetween(-(num_consecutive_months - 2), Window.currentRow)))
# Join the two DataFrames so both flag columns are available in one place,
# then filter for months that end a run of both income and score decreases
consecutive_decrease_users = (df1.join(df2, on=["user_id", "month"])
    .filter((col("consecutive_income_decreases") == num_consecutive_months - 1) &
            (col("consecutive_score_decreases") == num_consecutive_months - 1))
    .select("user_id", "month"))
# Show the users and months where both income and score decreased consecutively
consecutive_decrease_users.show()
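For reference, here is a plain-Python sketch of the lag/flag/rolling-sum mechanics on a single made-up salary series. It sums only the last num_consecutive_months - 1 flags, so the sum equals 2 exactly at the months that end a genuine three-month decreasing run:

```python
# Plain-Python emulation of the window steps for one user's series:
# lag -> diff -> decrease flag -> rolling sum of the last (n - 1) flags.
def rolling_decrease_sum(values, n=3):
    # flag[i] is 1 when values[i] < values[i-1] (a negative month-over-month diff);
    # the first position has no previous month, so its flag is 0
    flags = [0] + [1 if b < a else 0 for a, b in zip(values, values[1:])]
    # rolling sum over the current flag and the previous (n - 2) flags,
    # mirroring rowsBetween(-(n - 2), Window.currentRow)
    return [sum(flags[max(0, i - (n - 2)): i + 1]) for i in range(len(flags))]

salary = [100, 90, 80, 85, 70, 60, 50]  # made-up monthly values
sums = rolling_decrease_sum(salary)
# an index ends a three-month decreasing run exactly when the sum reaches n - 1 = 2
ends = [i for i, s in enumerate(sums) if s == 2]
print(ends)  # -> [2, 5, 6]
```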