Week 1 Discussion
This week our focus is on data mining. In the article this week, we focus on deciding whether the results of two different data mining algorithms provide significantly different information. Therefore, answer the following questions:
- When using different data mining algorithms, why is it fundamentally important to understand why they are being used?
- If there are significant differences in the data output, how can this happen and why is it important to note the differences?
- Who should determine which algorithm is “right” and the one to keep? Why?
- Requirements:
- Students must not copy and post from sources. When referencing sources, students must rephrase all work from the authors and include in-text citations and references in APA format.
- Students must post their initial post by Thursday evening at 11:59 pm ET and have two total days of engagement (the first day of engagement must answer the initial post and then at least one more additional day of engagement with peers). All posts must be answered by Sunday at 11:59 pm ET.
- The initial discussion board posts must be from 100-150 words.
- Peer responses must be 50-100 words.
- The content must also not be from the textbook.
- Peer responses must be substantive in nature.
- Build on something your classmate said.
- Explain why and how you see things differently.
- Ask a probing or clarifying question.
- Share an insight from having read your classmate's posting.
- Offer and support an opinion.
- Expand on your classmate's posting.
- Peer responses that are “a good job” or “I agree” do not count as substantive posts.
Week 1 Homework
This week we focus on the introductory chapter, in which we review data mining and its key components. In the format below, answer the following questions:
- What is knowledge discovery in databases (KDD)?
- Review section 1.2 and review the various motivating challenges. Select one and note what it is and why it is a challenge.
- Note how data mining integrates with the components of statistics and AI, ML, and Pattern Recognition.
- Note the difference between predictive and descriptive tasks and the importance of each.
- In an APA7-formatted paper, answer all of the questions above. There should be a heading for each of the questions as well. Ensure there are at least two peer-reviewed sources to support your work. The paper should be at least two pages of content (this does not include the cover page or reference page).
Data Mining: Introduction
Lecture Notes for Chapter 1
Introduction to Data Mining
by
Tan, Steinbach, Kumar
Why Mine Data? Commercial Viewpoint
- Lots of data is being collected and warehoused
  - Web data, e-commerce
  - Purchases at department/grocery stores
  - Bank/credit card transactions
- Computers have become cheaper and more powerful
- Competitive pressure is strong
  - Provide better, customized services for an edge (e.g., in Customer Relationship Management)

Why Mine Data? Scientific Viewpoint
- Data collected and stored at enormous speeds (GB/hour)
  - Remote sensors on a satellite
  - Telescopes scanning the skies
  - Microarrays generating gene expression data
  - Scientific simulations generating terabytes of data
- Traditional techniques are infeasible for raw data
- Data mining may help scientists
  - in classifying and segmenting data
  - in hypothesis formation
Mining Large Data Sets – Motivation
- There is often information “hidden” in the data that is not readily evident
- Human analysts may take weeks to discover useful information
- Much of the data is never analyzed at all
The Data Gap
[Chart: total new disk storage (TB) shipped since 1995 vs. the number of analysts]
From: R. Grossman, C. Kamath, V. Kumar, “Data Mining for Scientific and Engineering Applications”
The number of science and engineering Ph.D.s per year stayed roughly flat through the 1990s, while new disk capacity grew from about 105 PB in 1995 to about 13,000 PB in 2003:

Year | Disk units shipped | Capacity (PB)
1995 | 89,054 | 104.8
1996 | 105,686 | 183.9
1997 | 129,281 | 343.63
1998 | 143,649 | 724.36
1999 | 165,857 | 1394.6
2000 | 187,835 | 2553.7
2001 | 212,800 | 4641
2002 | 239,138 | 8119
2003 | 268,227 | 13027
What is Data Mining?
- Many definitions
  - Non-trivial extraction of implicit, previously unknown, and potentially useful information from data
  - Exploration & analysis, by automatic or semi-automatic means, of large quantities of data in order to discover meaningful patterns
What is (not) Data Mining?
- What is Data Mining?
- Certain names are more prevalent in certain US locations (O’Brien, O’Rourke, O’Reilly… in the Boston area)
- Group together similar documents returned by a search engine according to their context (e.g., Amazon rainforest, Amazon.com)
- What is not Data Mining?
- Look up phone number in phone directory
- Query a Web search engine for information about “Amazon”
Origins of Data Mining
- Draws ideas from machine learning/AI, pattern recognition, statistics, and database systems
- Traditional techniques may be unsuitable due to
  - Enormity of data
  - High dimensionality of data
  - Heterogeneous, distributed nature of data
[Diagram: Data Mining at the intersection of Machine Learning/Pattern Recognition, Statistics/AI, and Database Systems]
Data Mining Tasks
- Prediction Methods
- Use some variables to predict unknown or future values of other variables.
- Description Methods
- Find human-interpretable patterns that describe the data.
From [Fayyad, et al.] Advances in Knowledge Discovery and Data Mining, 1996
Data Mining Tasks…
- Classification [Predictive]
- Clustering [Descriptive]
- Association Rule Discovery [Descriptive]
- Sequential Pattern Discovery [Descriptive]
- Regression [Predictive]
- Deviation Detection [Predictive]
Classification: Definition
- Given a collection of records (training set)
- Each record contains a set of attributes; one of the attributes is the class.
- Find a model for class attribute as a function of the values of other attributes.
- Goal: previously unseen records should be assigned a class as accurately as possible.
- A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with training set used to build the model and test set used to validate it.
Classification Example
Attribute types: Refund and Marital Status are categorical, Taxable Income is continuous, and Cheat is the class attribute. The training set is used to learn a classifier (the model), which is then applied to the test set.

Training Set:
Tid | Refund | Marital Status | Taxable Income | Cheat
1 | Yes | Single | 125K | No
2 | No | Married | 100K | No
3 | No | Single | 70K | No
4 | Yes | Married | 120K | No
5 | No | Divorced | 95K | Yes
6 | No | Married | 60K | No
7 | Yes | Divorced | 220K | No
8 | No | Single | 85K | Yes
9 | No | Married | 75K | No
10 | No | Single | 90K | Yes

Test Set:
Refund | Marital Status | Taxable Income | Cheat
No | Single | 75K | ?
Yes | Married | 50K | ?
No | Married | 150K | ?
Yes | Divorced | 90K | ?
No | Single | 40K | ?
No | Married | 80K | ?
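To make the classification workflow concrete, here is a minimal sketch, assuming pandas and scikit-learn are available, that learns a model from the training set above and applies it to the unlabeled test records. The choice of a decision tree and the one-hot encoding of the categorical attributes are illustrative assumptions, not something the slides prescribe.

```python
# Illustrative sketch (assumes pandas and scikit-learn are available).
# The slides do not prescribe a particular classifier; a decision tree
# is just one reasonable choice for this small tabular example.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

train = pd.DataFrame({
    "Refund":        ["Yes", "No", "No", "Yes", "No", "No", "Yes", "No", "No", "No"],
    "MaritalStatus": ["Single", "Married", "Single", "Married", "Divorced",
                      "Married", "Divorced", "Single", "Married", "Single"],
    "TaxableIncome": [125, 100, 70, 120, 95, 60, 220, 85, 75, 90],  # in thousands
    "Cheat":         ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"],
})

test = pd.DataFrame({
    "Refund":        ["No", "Yes", "No", "Yes", "No", "No"],
    "MaritalStatus": ["Single", "Married", "Married", "Divorced", "Single", "Married"],
    "TaxableIncome": [75, 50, 150, 90, 40, 80],
})

# One-hot encode the categorical attributes so the tree can use them.
X_train = pd.get_dummies(train.drop(columns="Cheat"))
y_train = train["Cheat"]
X_test = pd.get_dummies(test).reindex(columns=X_train.columns, fill_value=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(model.predict(X_test))  # predicted Cheat label for each test record
```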
Classification: Application 1
- Direct Marketing
- Goal: Reduce cost of mailing by targeting a set of consumers likely to buy a new cell-phone product.
- Approach:
- Use the data for a similar product introduced before.
- We know which customers decided to buy and which decided otherwise. This {buy, don’t buy} decision forms the class attribute.
- Collect various demographic, lifestyle, and company-interaction related information about all such customers.
Type of business, where they stay, how much they earn, etc.
- Use this information as input attributes to learn a classifier model.
From [Berry & Linoff] Data Mining Techniques, 1997
Classification: Application 2
- Fraud Detection
- Goal: Predict fraudulent cases in credit card transactions.
- Approach:
- Use credit card transactions and the information on the account holder as attributes.
When does a customer buy, what does he buy, how often does he pay on time, etc.
- Label past transactions as fraud or fair transactions. This forms the class attribute.
- Learn a model for the class of the transactions.
- Use this model to detect fraud by observing credit card transactions on an account.
Classification: Application 3
- Customer Attrition/Churn:
- Goal: To predict whether a customer is likely to be lost to a competitor.
- Approach:
- Use detailed records of transactions with each of the past and present customers to find attributes.
How often the customer calls, where he calls, what time-of-the day he calls most, his financial status, marital status, etc.
- Label the customers as loyal or disloyal.
- Find a model for loyalty.
From [Berry & Linoff] Data Mining Techniques, 1997
Classification: Application 4
- Sky Survey Cataloging
- Goal: To predict class (star or galaxy) of sky objects, especially visually faint ones, based on the telescopic survey images (from Palomar Observatory).
3000 images with 23,040 x 23,040 pixels per image.
- Approach:
- Segment the image.
- Measure image attributes (features) – 40 of them per object.
- Model the class based on these features.
- Success Story: Could find 16 new high red-shift quasars, some of the farthest objects that are difficult to find!
From [Fayyad, et al.] Advances in Knowledge Discovery and Data Mining, 1996
Classifying Galaxies
- Class: stages of formation (Early, Intermediate, Late)
- Attributes: image features, characteristics of light waves received, etc.
- Data size: 72 million stars, 20 million galaxies
  - Object catalog: 9 GB
  - Image database: 150 GB
Courtesy: http://aps.umn.edu
Clustering Definition
- Given a set of data points, each having a set of attributes, and a similarity measure among them, find clusters such that
- Data points in one cluster are more similar to one another.
- Data points in separate clusters are less similar to one another.
- Similarity Measures:
- Euclidean Distance if attributes are continuous.
- Other Problem-specific Measures.
Illustrating Clustering
Euclidean distance based clustering in 3-D space:
- Intracluster distances are minimized
- Intercluster distances are maximized
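As a small illustration of the Euclidean-distance-based clustering pictured above, the sketch below runs k-means on synthetic 3-D points. The generated data and the choice of k = 3 are assumptions for demonstration, and scikit-learn's KMeans is just one convenient implementation.

```python
# Illustrative sketch: k-means uses Euclidean distance, so intracluster
# distances are minimized while clusters stay well separated.
# The random 3-D points and k = 3 are assumptions, not data from the slides.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three groups of points around different centers in 3-D space.
points = np.vstack([
    rng.normal(loc=center, scale=0.5, size=(50, 3))
    for center in ([0, 0, 0], [5, 5, 0], [0, 5, 5])
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)   # one centroid per cluster
print(kmeans.labels_[:10])       # cluster assignment of the first few points
```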
Clustering: Application 1
- Market Segmentation:
- Goal: subdivide a market into distinct subsets of customers where any subset may conceivably be selected as a market target to be reached with a distinct marketing mix.
- Approach:
- Collect different attributes of customers based on their geographical and lifestyle related information.
- Find clusters of similar customers.
- Measure the clustering quality by observing buying patterns of customers in same cluster vs. those from different clusters.
Clustering: Application 2
- Document Clustering:
- Goal: To find groups of documents that are similar to each other based on the important terms appearing in them.
- Approach: To identify frequently occurring terms in each document. Form a similarity measure based on the frequencies of different terms. Use it to cluster.
- Gain: Information Retrieval can utilize the clusters to relate a new document or search term to clustered documents.
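The approach described above (frequently occurring terms per document, a frequency-based similarity measure, then clustering) can be sketched roughly as follows. The toy documents, the TF-IDF weighting, and the use of k-means are illustrative assumptions rather than the method used in the study that follows.

```python
# Rough sketch of the document-clustering approach described above:
# represent each document by term frequencies, then cluster.
# The toy documents and the TF-IDF + k-means choices are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "stocks fell as markets reacted to interest rates",
    "the central bank raised interest rates again",
    "the team won the championship game last night",
    "injured star player to miss the next game",
]

tfidf = TfidfVectorizer(stop_words="english")   # word filtering
X = tfidf.fit_transform(docs)                   # term-frequency based vectors

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g., financial articles in one cluster, sports in the other
```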
Illustrating Document Clustering
- Clustering Points: 3204 Articles of Los Angeles Times.
- Similarity Measure: How many words are common in these documents (after some word filtering).
Category | Total Articles | Correctly Placed
Financial | 555 | 364
Foreign | 341 | 260
National | 273 | 36
Metro | 943 | 746
Sports | 738 | 573
Entertainment | 354 | 278
Clustering of S&P 500 Stock Data
- Observe stock movements every day.
- Clustering points: Stock-{UP/DOWN}
- Similarity measure: two points are more similar if the events described by them frequently happen together on the same day.
  - We used association rules to quantify a similarity measure.

Cluster | Discovered Clusters | Industry Group
1 | Applied-Matl-DOWN, Bay-Network-DOWN, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-DOWN, Tellabs-Inc-DOWN, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN | Technology1-DOWN
2 | Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, ADV-Micro-Device-DOWN, Andrew-Corp-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN | Technology2-DOWN
3 | Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN | Financial-DOWN
4 | Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP | Oil-UP
Association Rule Discovery: Definition
- Given a set of records, each of which contains some number of items from a given collection;
- Produce dependency rules that will predict the occurrence of an item based on the occurrences of other items.
Rules Discovered:
{Milk} –> {Coke}
{Diaper, Milk} –> {Beer}
TID | Items
1 | Bread, Coke, Milk
2 | Beer, Bread
3 | Beer, Coke, Diaper, Milk
4 | Beer, Bread, Diaper, Milk
5 | Coke, Diaper, Milk
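The support and confidence of the two discovered rules can be computed directly from the five transactions in the table above; the short sketch below does exactly that and is not a full frequent-itemset miner such as Apriori.

```python
# Sketch: compute support and confidence of the discovered rules from the
# transaction table above (not a full frequent-itemset miner such as Apriori).
transactions = [
    {"Bread", "Coke", "Milk"},
    {"Beer", "Bread"},
    {"Beer", "Coke", "Diaper", "Milk"},
    {"Beer", "Bread", "Diaper", "Milk"},
    {"Coke", "Diaper", "Milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    return support(antecedent | consequent) / support(antecedent)

print(confidence({"Milk"}, {"Coke"}))            # {Milk} -> {Coke}
print(confidence({"Diaper", "Milk"}, {"Beer"}))  # {Diaper, Milk} -> {Beer}
```

For this data, {Milk} –> {Coke} holds with confidence 0.75 and {Diaper, Milk} –> {Beer} with confidence of about 0.67.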
Association Rule Discovery: Application 1
- Marketing and Sales Promotion:
- Let the rule discovered be
{Bagels, … } –> {Potato Chips}
- Potato Chips as consequent => Can be used to determine what should be done to boost its sales.
- Bagels in the antecedent => Can be used to see which products would be affected if the store discontinues selling bagels.
- Bagels in antecedent and Potato chips in consequent => Can be used to see what products should be sold with Bagels to promote sale of Potato chips!
Association Rule Discovery: Application 2
- Supermarket shelf management.
- Goal: To identify items that are bought together by sufficiently many customers.
- Approach: Process the point-of-sale data collected with barcode scanners to find dependencies among items.
- A classic rule —
- If a customer buys diaper and milk, then he is very likely to buy beer.
- So, don’t be surprised if you find six-packs stacked next to diapers!
Association Rule Discovery: Application 3
- Inventory Management:
- Goal: A consumer appliance repair company wants to anticipate the nature of repairs on its consumer products and keep the service vehicles equipped with the right parts to reduce the number of visits to consumer households.
- Approach: Process the data on tools and parts required in previous repairs at different consumer locations and discover the co-occurrence patterns.
Sequential Pattern Discovery: Definition
- Given a set of objects, each associated with its own timeline of events, find rules that predict strong sequential dependencies among different events.
- Rules are formed by first discovering patterns. Event occurrences in the patterns are governed by timing constraints.
- Example pattern: (A B) (C) (D E), with timing constraints such as a window size within each element (<= ws), minimum and maximum gaps between consecutive elements (> ng, <= xg), and a maximum overall span (<= ms).
Sequential Pattern Discovery: Examples
- In telecommunications alarm logs,
- (Inverter_Problem Excessive_Line_Current) (Rectifier_Alarm) –> (Fire_Alarm)
- In point-of-sale transaction sequences,
- Computer Bookstore:
(Intro_To_Visual_C) (C++_Primer) –> (Perl_for_dummies,Tcl_Tk)
- Athletic Apparel Store:
(Shoes) (Racket, Racketball) –> (Sports_Jacket)
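As a rough illustration of applying a discovered sequential rule, the sketch below checks whether one customer's purchase timeline contains the athletic-apparel pattern in order. The example timeline is invented, and the matching ignores the timing constraints (gaps, windows, span) mentioned in the definition.

```python
# Sketch: check whether an ordered sequential pattern occurs in an event
# timeline. Timing constraints (gaps, windows, span) are ignored here;
# the example timeline is an assumption, not data from the slides.
def contains_pattern(timeline, pattern):
    """True if the pattern's elements occur in order, each element being a
    set of events that must all appear in a single timeline entry."""
    i = 0
    for events in timeline:
        if i < len(pattern) and pattern[i] <= set(events):
            i += 1
    return i == len(pattern)

# One customer's purchases over time at the athletic apparel store.
timeline = [
    {"Shoes"},
    {"Socks"},
    {"Racket", "Racketball"},
    {"Sports_Jacket"},
]

pattern = [{"Shoes"}, {"Racket", "Racketball"}, {"Sports_Jacket"}]
print(contains_pattern(timeline, pattern))  # True: pattern occurs in order
```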
Regression
- Predict a value of a given continuous valued variable based on the values of other variables, assuming a linear or nonlinear model of dependency.
- Greatly studied in statistics, neural network fields.
- Examples:
- Predicting sales amounts of a new product based on advertising expenditure.
- Predicting wind velocities as a function of temperature, humidity, air pressure, etc.
- Time series prediction of stock market indices.
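A minimal sketch of the first example, fitting a linear model that predicts sales from advertising expenditure; the data points are invented for illustration, and scikit-learn's LinearRegression is an assumed implementation choice.

```python
# Minimal sketch: linear regression predicting sales from advertising spend.
# The data points are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

ad_spend = np.array([[10], [20], [30], [40], [50]])   # advertising expenditure
sales = np.array([25, 44, 62, 85, 101])               # observed sales amounts

model = LinearRegression().fit(ad_spend, sales)
print(model.coef_, model.intercept_)   # fitted slope and intercept
print(model.predict([[60]]))           # predicted sales at a new spend level
```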
Deviation/Anomaly Detection
- Detect significant deviations from normal behavior
- Applications:
- Credit Card Fraud Detection
- Network Intrusion Detection
  - Typical network traffic at the university level may reach over 100 million connections per day
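A very small sketch of flagging significant deviations from normal behavior: baseline statistics are estimated from transactions assumed to be normal, and new transactions more than three standard deviations away are flagged. The amounts and the 3-sigma threshold are assumptions for illustration.

```python
# Sketch: flag observations that deviate strongly from normal behavior
# using a simple z-score rule. The data and 3-sigma threshold are assumptions.
import numpy as np

# Historical transaction amounts assumed to represent normal behavior.
normal = np.array([23.5, 18.0, 25.1, 22.8, 19.9, 24.3, 21.7, 26.0, 20.4, 22.1])
mu, sigma = normal.mean(), normal.std()

new_transactions = np.array([24.0, 950.0, 19.5])
z = np.abs(new_transactions - mu) / sigma
print(new_transactions[z > 3])  # the 950.0 charge is flagged as a deviation
```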
Challenges of Data Mining
- Scalability
- Dimensionality
- Complex and Heterogeneous Data
- Data Quality
- Data Ownership and Distribution
- Privacy Preservation
- Streaming Data