I just got rejected with the reason "The analysis is incorrect and wasn't specific about the issue."
Assessment:
DataFlowIQ Problem Overview: Assume you are a Customer Engineer at DataFlowIQ, responsible for customer engagement and data engineering for DataFlowIQ's customers and stakeholders.
Johnny’s Jeans is a fashion start-up and a new customer of DataFlowIQ. The first campaign they want to create on our platform is a Thank-You message to all their first customers.
They want to target their first 10,000 customers and send them some stats about their first year of sales. Johnny's Jeans' finance team reported that these first 10,000 customers spent a collective $33,904,525 on their site.
However, when the marketing team ran the same sum of total revenue on the DataFlowIQ platform, the total came out lower, at $29,162,044. The Johnny's Jeans marketing team was concerned about this data discrepancy and has reached out to DataFlowIQ.
Background of our integration with Johnny's Jeans:
- Every few weeks, Johnny's Jeans' data warehouse does an export of the most recent rows from their user-summary table. This data is encrypted and dropped into an AWS S3 bucket (for this exercise the files are available at the Google Drive link below).
https://drive.google.com/drive/folders/1y07q1MOU0zQgedXgCLN1hnotnN8e15q2?usp=drive_link
- DataFlowIQ's server is designed to ping that S3 bucket every 24 hours, and when a new file is dropped, DataFlowIQ's system is supposed to retrieve the new file for decryption.
- The decrypted user-summary data is a CSV with these fields:
a. user_id
b. Total_spend (total this user has spent at Johnny's Jeans)
c. count_saved_items (total number of bookmarked items)
d. loyalty_credits (amount of loyalty points the user can spend)
e. batch_id
- DataFlowIQ's data pipeline is designed to read through all the data files and de-duplicate the data by choosing the largest batch_id for each user. This ensures that DataFlowIQ should only be loading one row per user (a pandas sketch of this rule follows this list).
- Once the data is de-duplicated, it is loaded into DataFlowIQ's platform and is ready for users to start running queries.
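For reference, the de-duplication rule described above can be reproduced with a minimal pandas sketch like the one below. The Delta1.csv–Delta4.csv file names and the exact column casing are my assumptions based on the prompt, not something confirmed by DataFlowIQ.

```python
import pandas as pd

# Load the base export and the four update files.
# File names beyond Delta0.csv are assumed from the "Deltas 1-4" description.
frames = [pd.read_csv(f"Delta{i}.csv") for i in range(5)]
all_rows = pd.concat(frames, ignore_index=True)

# Documented rule: for each user_id, keep only the row with the largest batch_id.
deduped = (
    all_rows.sort_values("batch_id")
            .drop_duplicates(subset="user_id", keep="last")
)

print(f"{len(all_rows)} raw rows -> {len(deduped)} unique users after de-duplication")
```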
Task:
Analyze the data files provided at the Google Drive link above and figure out, as best you can, what the problem is. Bear in mind that, just like any software engineering effort, DataFlowIQ is not immune to bugs and things may not work perfectly as expected (a quick sanity check on the totals is sketched after this task list).
- You are writing an email to the DataFlowIQ backend engineer describing what you've found and what you think the issue is. Please provide as much information and/or examples as possible.
- Michael Greene, Johnny's Jeans' email marketing director, emailed us asking why our data doesn't seem to match up with theirs. Write a response for him.
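As a first sanity check for the task, the platform-side total can be recomputed straight from the DFIQ export and compared against both reported figures. This is only a sketch; it assumes DFIQ-user-summary.csv uses the same column names as the delta schema above (the casing of Total_spend may differ in the actual file).

```python
import pandas as pd

FINANCE_TOTAL = 33_904_525   # total reported by Johnny's Jeans' finance team
PLATFORM_TOTAL = 29_162_044  # total the marketing team saw on the DataFlowIQ platform

# Column name assumed from the delta schema; adjust the casing if the export differs.
dfiq = pd.read_csv("DFIQ-user-summary.csv")
recomputed = dfiq["Total_spend"].sum()

print(f"Recomputed from DFIQ export:   {recomputed:,.0f}")
print(f"Shortfall vs finance figure:   {FINANCE_TOTAL - recomputed:,.0f}")
print(f"Difference vs platform figure: {recomputed - PLATFORM_TOTAL:,.0f}")
```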
Task Notes:
- We have provided the 5 decrypted files that were sent to us by Johnny's Jeans.
a. Delta0.csv was their initial base set of data.
b. Deltas 1-4 were their update files.
- We have also sent you DataFlowIQ's version of their user-summary table (DFIQ-user-summary.csv). Johnny's Jeans has exported the table on their end, and we've sent you that as well (Johnny-user-summary.csv).
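To narrow down where the two exports diverge, the two user-summary files can also be compared row by row on user_id. Again a sketch under the same column-name assumptions; the intent is just to surface users that are missing from one export or whose spend totals disagree.

```python
import pandas as pd

dfiq = pd.read_csv("DFIQ-user-summary.csv")
johnny = pd.read_csv("Johnny-user-summary.csv")

# Align both exports on user_id and flag rows that exist on only one side.
merged = dfiq.merge(johnny, on="user_id", how="outer",
                    suffixes=("_dfiq", "_johnny"), indicator=True)

missing = merged[merged["_merge"] != "both"]
mismatched = merged[(merged["_merge"] == "both") &
                    (merged["Total_spend_dfiq"] != merged["Total_spend_johnny"])]

print(f"Users present in only one export: {len(missing)}")
print(f"Users with differing Total_spend: {len(mismatched)}")
```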
File Upload:
Analyze the data and, as per Tasks 1 & 2, draft the two emails based on your findings and create a PDF file. If you think an attachment of your analysis is relevant to support the content of the email drafts, create those files. Create one single ZIP file and upload it to complete this task assessment.
Link to my submission: https://drive.google.com/file/d/145D2VlsD-peF1GoHswvPcHmx1ULzENfJ/view?usp=drivesdk