Assessment Item 1: Report – Statistical Analysis of Business Data
Overview
Assessment tasks

| Assessment ID | Assessment Item | When due | Weighting | ULO# | CLO# for MITS |
|---|---|---|---|---|---|
| 1 | Report – Statistical Analysis of Business Data (Individual) (1000 words) | Session 6 | 30% | 1, 2 | 1, 2 |
Objective
This assessment item relates to the unit learning outcomes listed in the unit descriptor. It is designed to give students experience in analyzing a suitable dataset and creating different visualizations in a dashboard, and to improve students' presentation skills relevant to the Unit of Study subject matter.
Case Study:
You are a data scientist hired by a retail company, “SmartMart,” which operates a chain of grocery stores. SmartMart has been in the market for several years and has a significant customer base. However, the company is facing challenges in optimizing its operations and maximizing profits. As a data scientist, your task is to analyze the provided dataset and identify areas where data science techniques can be applied to create business value for SmartMart.
Dataset:
You will need to use the Python code below to generate your own artificial dataset. The generated dataset contains information on SmartMart's sales transactions over the past year, including:
- Date and time of each transaction
- Customer ID
- Product ID
- Quantity sold
- Unit price
- Total transaction amount
- Store ID
Tasks:
You are tasked with applying appropriate statistical analysis techniques to extract valuable information from the dataset; a brief illustrative sketch follows the list below. These techniques may include, but are not limited to:
- Descriptive statistics
- Correlation analysis
- Hypothesis testing
- Time-series analysis
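As a minimal, non-prescriptive sketch, the snippet below shows one way each of the four techniques above could be applied. It assumes the dataset has been saved as ICT603_A1.csv by the dataset-generating code later in this brief; the specific tests and groupings chosen here are illustrative, not required.

```python
# Illustrative sketch only -- one possible application of each listed technique.
# Assumes ICT603_A1.csv was produced by the dataset-generating code below.
import pandas as pd
from scipy import stats

df = pd.read_csv("ICT603_A1.csv", parse_dates=["Date & Time"])

# Descriptive statistics: summarize the numeric columns
print(df[["Quantity Sold", "Unit Price", "Total Transaction Amount"]].describe())

# Correlation analysis: e.g., quantity sold vs. unit price
print(df["Quantity Sold"].corr(df["Unit Price"]))

# Hypothesis testing: e.g., do two stores differ in mean transaction amount?
s1 = df.loc[df["Store ID"] == "S001", "Total Transaction Amount"]
s2 = df.loc[df["Store ID"] == "S002", "Total Transaction Amount"]
print(stats.ttest_ind(s1, s2, equal_var=False))

# Time-series analysis: e.g., monthly revenue trend
monthly = df.set_index("Date & Time")["Total Transaction Amount"].resample("M").sum()
print(monthly)
```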
You need to:
- Identify key findings and insights from your analysis that can help SmartMart make data-driven decisions to optimize its operations and increase profitability.
- Present your analysis results in a clear and concise manner, including visualizations and explanations where necessary.
- Provide recommendations on specific strategies or actions that SmartMart can take based on your analysis.
Deliverables:
- You need to submit one report (1000 words ±10%) in PDF format documenting your analysis process, findings, and recommendations. The report must contain the Python code/scripts used for the data analysis, with comments explaining the code logic and methodology, together with relevant visualizations (e.g., plots, charts) supporting your analysis and findings.
Note:
Please submit a single PDF report that includes screenshots of your Python code along with the corresponding results, as well as screenshots of the visualizations relevant to and supporting your analysis. The screenshot of your Python code must show your student ID and name clearly visible at the beginning of the script. You may also export the dataset as a .csv file and use other software, such as MS Excel, for the same analysis.
Report Structure (suggestive)
- Executive summary
- Introduction
- Analysis Process and Methodology
- Findings and insights
- Recommendations
- Code screenshots
- Conclusion
- Appendix (optional)
Dataset:
Use the Python code below to generate a dataset with 1000 rows and the following 7 columns: Date & Time, Customer ID, Product ID, Quantity Sold, Unit Price, Total Transaction Amount, and Store ID. You can use any Python editor to generate the dataset.
```python
"""
ICT603 Data Science
Assessment 1 – Dataset generating code
Instruction – You can use this code to create your own file and
analyse the generated dataset. You MUST use the code given in the
assessment guideline to create the dataset. Refer to the assessment
details for more instructions.
Student ID – <********>
Name – <******************>
Campus – <*******>
Subject code – <ICT603>
Assessment no – <Assessment 1>
"""
import pandas as pd
import numpy as np
import random
from datetime import datetime, timedelta

# Generate 1000 random dates and times within a specific range
start_date = datetime(2023, 1, 1)
end_date = datetime(2023, 12, 31)
date_times = [start_date + timedelta(seconds=random.randint(0, int((end_date - start_date).total_seconds())))
              for _ in range(1000)]

# Generate random customer IDs
customer_ids = ['C' + str(i).zfill(4) for i in range(1, 1001)]

# Generate random product IDs
product_ids = ['P' + str(i).zfill(3) for i in range(1, 101)]

# Generate random quantities sold
quantities_sold = np.random.randint(1, 10, size=1000)

# Generate random unit prices
unit_prices = np.random.uniform(1, 100, size=1000)

# Calculate total transaction amounts
total_transaction_amounts = quantities_sold * unit_prices

# Generate random store IDs and randomly assign them to transactions
store_ids = ['S' + str(i).zfill(3) for i in range(1, 11)]
store_ids = [random.choice(store_ids) for _ in range(1000)]

# Create DataFrame
data = {
    'Date & Time': date_times,
    'Customer ID': random.choices(customer_ids, k=1000),
    'Product ID': random.choices(product_ids, k=1000),
    'Quantity Sold': quantities_sold,
    'Unit Price': unit_prices,
    'Total Transaction Amount': total_transaction_amounts,
    'Store ID': store_ids
}
df = pd.DataFrame(data)

# Convert Date & Time column to datetime format
df['Date & Time'] = pd.to_datetime(df['Date & Time'])

# Sort DataFrame by Date & Time and reset the index
df = df.sort_values(by='Date & Time')
df.reset_index(drop=True, inplace=True)

# Print the DataFrame and export it as a CSV file
print(df)
df.to_csv("ICT603_A1.csv")
```
Submission Instructions
All submissions are to be made through the Assignment 1 drop-boxes that will be set up in the Moodle account for this Unit of Study. Assignments not submitted through these drop-boxes will not be considered. Submissions must be made by the due date and time (in the session detailed above), as determined by your Unit Coordinator.
Note: All work is due by the due date and time. Late submissions will be penalized at 20% of the assessment final grade per day, including weekends.
Marking Criteria/Rubric
You will be assessed on the following marking criteria/rubric.

Total Marks: 30

| Assessment criteria | Professional (80%-100%) | Very Good (70%-79%) | Good (60%-69%) | Satisfactory (50%-59%) | Unsatisfactory (0%-49%) |
|---|---|---|---|---|---|
| Findings and Insights | Identifies key findings and insights with exceptional clarity and depth, providing valuable and actionable insights for SmartMart's decision-making process. | Presents clear and insightful findings, demonstrating a strong understanding of the dataset and its implications for SmartMart's operations. | Identifies basic findings and insights, but may lack depth or clarity in analysis, resulting in somewhat limited actionable insights. | Presents limited findings and insights, with some relevance to SmartMart's operations, but lacks depth or clear connections to the dataset. | Fails to identify meaningful findings or insights, with little relevance to SmartMart's operations. |
| Presentation and Clarity | The report is exceptionally clear, well-organized, and effectively communicates the analysis results and recommendations. Visualizations are highly effective and support the analysis. | The report is well-structured and effectively communicates the analysis results and recommendations. Visualizations are clear and relevant. | The report is adequately structured and communicates the analysis results and recommendations with some clarity. Visualizations may be somewhat unclear or lacking in relevance. | The report lacks clear structure and may be difficult to follow. Communication of analysis results and recommendations is somewhat unclear. Visualizations are limited or ineffective. | The report is poorly structured and difficult to follow. Communication of analysis results and recommendations is unclear or absent. Visualizations are missing or irrelevant. |
| Python Code/Scripts | Python code/scripts are well-documented, clear, and demonstrate advanced proficiency in data analysis techniques. Comments thoroughly explain code logic and methodology. | Python code/scripts are well-structured and demonstrate proficiency in data analysis techniques. Comments provide adequate explanations of code logic and methodology. | Python code/scripts are adequately structured and demonstrate basic proficiency in data analysis techniques. Comments may lack depth or clarity in explaining code logic and methodology. | Python code/scripts are somewhat disorganized or lack clarity in structure. Demonstrates limited proficiency in data analysis techniques. Comments may be sparse or unclear. | Python code/scripts are poorly structured or lack clarity. Demonstrates minimal proficiency in data analysis techniques. Comments are absent or insufficient. |
| Recommendations | Provides detailed and actionable recommendations based on the analysis findings, demonstrating a deep understanding of SmartMart's business needs and potential strategies for improvement. | Offers clear and relevant recommendations based on the analysis findings, addressing SmartMart's business needs and suggesting potential strategies for improvement. | Provides basic recommendations based on the analysis findings, but may lack depth or specificity in addressing SmartMart's business needs. | Offers limited recommendations based on the analysis findings, with minimal relevance to SmartMart's business needs or strategies for improvement. | Fails to provide meaningful recommendations based on the analysis findings, with little relevance to SmartMart's business needs or strategies for improvement. |
Assessment Item 2: Data Acquisition and Data Mining (Group) – Part A: Report and Part B: Oral Presentation
Overview
Assessment tasks

| Assessment ID | Assessment Item | When due | Weighting | ULO# | CLO# for MITS |
|---|---|---|---|---|---|
| 2 | Data Acquisition and Data Mining (Group): Part A – Report (1000 words); Part B – Presentations | Part A – Session 9; Part B – Session 10 | Part A – 20%; Part B – 10%; Total – 30% | 1, 3, 4 | 1, 2, 3 |
Assignment Overview:
In this assignment, you will work in a group of 3 to 5 students to conduct an Exploratory Data Analysis (EDA) on a comprehensive dataset. The dataset can be acquired from internal or external sources, or by merging both. You will utilize appropriate techniques, tools, and programming languages, such as Python, to perform various data procedures including data acquisition, data wrangling, and data mining to extract meaningful insights from the dataset. The final deliverables will include an EDA report and an oral presentation video to showcase your findings and analysis.
Assignment Tasks:
- Data Acquisition:
- Identify and acquire a comprehensive dataset suitable for the EDA. You can choose from the suggested data sources provided or explore and select different datasets based on your group’s common interest.
- Ensure the dataset is relevant, sufficiently large, and contains multiple variables for thorough analysis.
Example Data Sources:
- Kaggle Datasets (https://www.kaggle.com/datasets)
- UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/index.php)
- Government Open Data Portals (e.g., data.gov)
- Academic Research Databases (e.g., PubMed, IEEE Xplore)
- Social Media APIs (e.g., Twitter, Facebook)
- Data Wrangling:
- Preprocess the acquired dataset to handle missing values, outliers, and inconsistencies.
- Perform data cleaning tasks such as removing duplicates, standardizing formats, and transforming variables if necessary.
- Explore methods to handle categorical variables and convert them into a suitable format for analysis.
Note: It is mandatory that data wrangling operations be incorporated into the preparation of the dataset. A brief illustrative sketch follows this note.
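The snippet below is a minimal, hedged sketch of common wrangling steps; the file name your_dataset.csv and the columns price, category, and date are hypothetical placeholders for whatever dataset your group acquires.

```python
# Minimal wrangling sketch -- file and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("your_dataset.csv")  # hypothetical file name

# Remove duplicate rows
df = df.drop_duplicates()

# Impute missing values in a numeric column with its median
df["price"] = df["price"].fillna(df["price"].median())

# Standardize a date column's format; invalid entries become NaT
df["date"] = pd.to_datetime(df["date"], errors="coerce")

# Cap outliers at the 1st/99th percentiles (one of several possible approaches)
low, high = df["price"].quantile([0.01, 0.99])
df["price"] = df["price"].clip(lower=low, upper=high)

# Convert a categorical variable into a numeric format for analysis
df = pd.get_dummies(df, columns=["category"], drop_first=True)
```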
- Data Exploration:
- Conduct initial data exploration to understand the structure, distributions, and relationships within the dataset.
- Utilize descriptive statistics and visualization techniques (e.g., histograms, box plots, scatter plots) to gain insights into individual variables and their interactions (see the sketch after this list).
- Identify any patterns, trends, or anomalies present in the data.
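As a sketch only (the column names quantity and price are hypothetical placeholders, and matplotlib is assumed to be available), the following illustrates the kinds of exploratory views mentioned above.

```python
# Illustrative exploration sketch -- column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("your_dataset.csv")

df.info()               # structure: column types and non-null counts
print(df.describe())    # distributions: summary statistics

# Histogram of a single variable
df["price"].plot(kind="hist", bins=30, title="Price distribution")
plt.show()

# Box plot to surface potential outliers
df.boxplot(column="price")
plt.show()

# Scatter plot of the relationship between two variables
df.plot(kind="scatter", x="quantity", y="price", title="Quantity vs. price")
plt.show()
```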
- Data Mining and Analysis:
- Apply appropriate data mining techniques such as clustering, classification, or regression to uncover deeper insights within the dataset (a brief sketch follows this list).
- Utilize machine learning algorithms if applicable to predict or classify certain outcomes based on the available variables.
- Perform feature engineering if necessary to enhance the predictive power of the model.
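A hedged sketch of one possible technique from the list above, k-means clustering with scikit-learn, is shown below; the feature columns quantity and price are hypothetical placeholders, and the choice of three clusters is illustrative only.

```python
# Illustrative k-means clustering sketch -- feature columns are hypothetical.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("your_dataset.csv")
features = df[["quantity", "price"]].dropna()

# Scale features so both contribute comparably to the distance metric
X = StandardScaler().fit_transform(features)

# Fit k-means with an illustrative choice of three clusters
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

# Profile each cluster by the mean of its original (unscaled) features
features = features.assign(cluster=kmeans.labels_)
print(features.groupby("cluster").mean())
```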
- EDA Report:
- Compile all findings, analysis, and visualizations into a comprehensive EDA report.
- Structure the report to include an introduction, methodology, results, discussion, and conclusion sections.
- Provide clear explanations for the steps taken, insights gained, and any challenges encountered during the analysis.
- Include visualizations and summary statistics to support your findings.
- Oral Presentation:
- Prepare a concise oral presentation to present your EDA findings to the class.
- Highlight key insights, trends, and interesting observations discovered during the analysis.
- Use visual aids such as slides or interactive dashboards to enhance the presentation.
Submission Guidelines:
- The EDA report of 1000 words must be submitted digitally, in either PDF or Word document format. The report should include an appendix at the end containing screenshots of the Python code along with its corresponding output.
- The oral presentation can be delivered using presentation software (e.g., PowerPoint, Google Slides).
- Ensure proper citation and referencing for any external sources or datasets used.
- Please submit two files, the Report and the Oral Presentation, through the link provided in the LMS before the specified deadline.
Note: Collaboration within the group is encouraged, but each group member must contribute substantially to the analysis, report writing, and presentation. Plagiarism or unauthorized use of external sources will result in penalties.
Submission Instructions
All submissions are to be submitted through Turnitin. Drop-boxes linked to Turnitin will be set up in the Unit of Study Moodle account. Assignments not submitted through these drop-boxes will not be considered.
Submissions must be made by the due date and time (in the session detailed above), as determined by your Unit Coordinator. Submissions made after the due date and time will be penalized at the rate of 20% per day (including weekend days).
The Turnitin similarity score will be used in determining the level of plagiarism, if any. Turnitin checks conference websites, journal articles, the Web, and your own class members' submissions for plagiarism. You can see your Turnitin similarity score when you submit your assignment to the appropriate drop-box. If the score is a concern, you will have a chance to change your assignment and re-submit; however, re-submission is only allowed prior to the submission due date and time. After the due date and time have elapsed, you cannot re-submit and the similarity score will stand. Thus, plan early and submit early to take advantage of this feature. You can make multiple submissions, but please remember that we only see the last submission, and the date and time you submitted will be taken from that submission.
Your document should be a single Word or PDF document containing your report.
Note: All work is due by the due date and time. Late submissions will be penalized at 20% of the assessment final grade per day, including weekends.
Marking Criteria/Rubric
You will be assessed on the following marking criteria/Rubric:
Total Marks: 30
| Assessment criteria | Professional (80%-100%) | Very Good (70%-79%) | Good (60%-69%) | Satisfactory (50%-59%) | Unsatisfactory (0%-49%) |
|---|---|---|---|---|---|
| Data Acquisition | Group acquires a highly relevant and comprehensive dataset from a diverse range of sources, ensuring it contains multiple variables for thorough analysis. | Group acquires a relevant dataset with multiple variables suitable for analysis, demonstrating good selection from suggested or alternative sources. | Group acquires a dataset, but it may lack depth or relevance in some areas, or may not contain a sufficient number of variables for thorough analysis. | Group acquires a dataset, but it may lack relevance or contain limited variables for analysis. | Group fails to acquire an appropriate dataset, lacking relevance, depth, or variables necessary for analysis. |
| Data Wrangling | Comprehensive data wrangling techniques are applied effectively, addressing missing values, outliers, inconsistencies, and categorical variables. Operations are well-documented and integrated seamlessly into the dataset. | Data wrangling operations are performed proficiently, addressing most missing values, outliers, inconsistencies, and categorical variables, with adequate documentation. | Data wrangling operations are attempted but may lack completeness or documentation, with some issues remaining unresolved. | Data wrangling efforts are minimal, leaving significant issues unaddressed, with little to no documentation provided. | Little to no attempt is made to perform data wrangling operations, resulting in unresolved issues and inconsistencies in the dataset. |
| Data Exploration | Extensive data exploration is conducted, utilizing a wide range of descriptive statistics and visualization techniques effectively to gain deep insights into the dataset's structure, distributions, and relationships. Patterns, trends, and anomalies are identified comprehensively. | Data exploration is conducted proficiently, utilizing descriptive statistics and visualization techniques to gain insights into the dataset's structure, distributions, and relationships. Some patterns, trends, and anomalies are identified. | Basic data exploration is conducted, with limited utilization of descriptive statistics and visualization techniques to understand the dataset's structure, distributions, and relationships. Some patterns or trends may be overlooked. | Limited data exploration is conducted, with minimal use of descriptive statistics and visualization techniques, resulting in shallow insights into the dataset's structure, distributions, and relationships. Important patterns or trends may be missed. | Little to no data exploration is conducted, resulting in a lack of understanding of the dataset's structure, distributions, and relationships. Important patterns or trends are not identified. |
| Data Mining and Analysis | Advanced data mining techniques are applied effectively, utilizing appropriate algorithms to uncover deep insights within the dataset. Machine learning algorithms are implemented where applicable, demonstrating advanced analytical skills. Feature engineering, if necessary, is performed proficiently to enhance the predictive power of the model. | Data mining techniques are applied proficiently, utilizing appropriate algorithms to uncover insights within the dataset. Machine learning algorithms may be applied with moderate success, demonstrating solid analytical skills. Some attempts at feature engineering may be made. | Basic data mining techniques are applied, but with limited effectiveness in uncovering insights within the dataset. Machine learning algorithms, if applied, may lack sophistication, with minimal attempts at feature engineering. | Limited data mining techniques are applied, with little effectiveness in uncovering insights within the dataset. Machine learning algorithms, if applied, are rudimentary, with no attempts at feature engineering. | |