Net (neural network) evolution
A. A neural network typically starts out with random coefficients (weights); hence, it produces essentially random predictions when presented with its first case. What are the key ingredients by which the net (neural network) evolves to produce more accurate predictions? (Please answer as clearly and concisely as possible.) (10 points)
Based on the textbook (p. 159), "the main strength of neural networks is their high predictive performance. Their structure supports capturing very complex relationships between predictors and a response, which is often not possible with other classifiers". A neural network is a multilayer structure consisting of an input layer, one or more hidden layers, and an output layer, forming a fully connected network with a one-way flow and no cycles. Additionally, based on the textbook (p. 164), "Training the model means estimating the weights θ_j and w_ij that lead to the best predictive results. The process for computing the neural network output for an observation is repeated for all the observations in the training set. For each observation the model produces a prediction, which is then compared with the actual response value. Their difference is the error for the output node". "In neural networks the estimation process uses the errors iteratively to update the estimated weights. In particular, the error for the output node is distributed across all the hidden nodes that led to it, so that each node is assigned "responsibility" for part of the error. Each of these node-specific errors is then used for updating the weights".
Based on Shmueli, Patel, and Bruce (2010, p. 164), "The most popular method for using model errors to update weights ("learning") is an algorithm called back propagation. As the name implies, errors are computed from the last layer (the output layer) back to the hidden layers". To summarize, the algorithm is used during training: after the random initial weights produce errors on the first cases, those errors are propagated backward through the network to adjust the corresponding weights w_ij and bias values θ_j. With case updating, the weights are adjusted after each observation, so the predictions gradually improve. This process repeats until the error rate reaches its minimum, yielding the most accurate predictions while avoiding overfitting.
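The back-propagation loop above can be sketched in Python with NumPy. This is a minimal illustration on a toy XOR dataset with a single hidden layer, not the textbook's XLMiner setup; the network size, learning rate, and epoch count are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: learn XOR with one hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rmse(W1, b1, W2, b2):
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.sqrt(np.mean((out - y) ** 2)))

# Random starting weights -> essentially random first predictions.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))
rmse_before = rmse(W1, b1, W2, b2)

lr = 0.5
for epoch in range(5000):
    # Forward pass: compute the prediction for every observation.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y                         # error at the output node

    # Back propagation: the output error is distributed across the
    # hidden nodes, and each node-specific error updates its weights.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

rmse_after = rmse(W1, b1, W2, b2)
print(rmse_before, rmse_after)
```

The key ingredient is the error-driven weight update: each pass compares predictions with actual responses and nudges every weight against its share of the error, so the RMS error falls as training proceeds.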
B. Consider the Boston Housing Data file (The schema of the data file is given on page 27 in Table 2.2 of the textbook.). (40 points)
a. Study the Neural Networks Prediction example from the URL:
http://www.solver.com/xlminer/help/neural-networks-prediction-example, and follow the example step by step.
b. Using XLMiner's neural network routine, fit a model with XLMiner's default values for the neural network parameters, using the predictors CRIM, ZN, INDUS, CHAS, NOX, RM, AGE, DIS, and RAD to predict the value of CAT.MEDV.
i. Record the RMS errors for the training data and the validation data, and observe the lift charts, repeating the process with the number of epochs changed to 300, 3000, 10,000, and 20,000.
RMS Error

| # Epochs | Training Data | Validation Data |
|----------|---------------|-----------------|
| 300      | 3.278352369   | 3.333782301     |
| 500      | 3.018668061   | 3.303478278     |
| 3000     | 2.233952259   | 3.138707793     |
| 10000    | 1.423640371   | 2.694391127     |
| 20000    | 1.172903683   | 3.478699146     |
The RMS error for the training data is less than the RMS error for the validation data. Moreover, the model appears to overfit the training data once the number of epochs exceeds 10,000. Also, repeating the process shows in the lift charts that the model fits the training data better than the validation data: the area between the lift curve and the baseline is larger for the training data.
ii. What happens to RMS error for the training data set as the number of epochs increases?
As the number of epochs increases, the RMS error for the training data keeps decreasing. Combined with the rise in validation error by the time the number of epochs reaches 20,000, this indicates that the model is overfitting the training data.
iii. What happens to RMS error for the validation data set as the number of epochs increases?
At first, the RMS error for the validation data behaves like the training error described in the previous question: it decreases as the number of epochs increases, reaching its minimum at 10,000 epochs. Beyond that point, however, it rises again; at 20,000 epochs the validation error is higher than at 10,000.
iv. Comments on the appropriate number of epochs for the model.
The appropriate number of epochs for the model is 10,000, where the RMS error on the validation data reaches its minimum; this avoids overfitting while keeping a reasonable error rate on the training data.
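The epoch-selection logic above can be sketched directly from the recorded values; this snippet simply picks the epoch count with the lowest validation RMS error from the table in part (i):

```python
# RMS errors recorded from the XLMiner runs: epochs -> (training, validation).
rms = {
    300:   (3.278352369, 3.333782301),
    500:   (3.018668061, 3.303478278),
    3000:  (2.233952259, 3.138707793),
    10000: (1.423640371, 2.694391127),
    20000: (1.172903683, 3.478699146),
}

# Training error falls monotonically, but validation error turns up after
# 10,000 epochs (overfitting), so choose the minimum-validation epoch count.
best_epochs = min(rms, key=lambda e: rms[e][1])
print(best_epochs)  # 10000
```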
C. For Association Rule Mining, please define the following terms: (10 points)
a. Support
Based on the textbook (p.195), “the support of a rule is simply the number of transactions that include both the antecedent and consequent item sets. It is called a support because it measures the degree to which the data “support" the validity of the rule. The support is sometimes expressed as a percentage of the total number of records in the database”.
b. Confidence
Based on the textbook (p.196), “it is a measurement "that expresses the degree of uncertainty about the "if-then" rule. This is known as the confidence of the rule. This measure compares the co-occurrence of the antecedent and consequent item sets in the database to the occurrence of the antecedent item sets. Confidence is defined as the ratio of the number of transactions that include all antecedent and consequent item sets (namely, the support) to the number of transactions that include all the antecedent item sets":
Confidence = (# transactions with both antecedent and consequent item sets) / (# transactions with antecedent item set)
c. Lift
Based on the textbook (p.197), “the lift ratio is the confidence of the rule divided by the confidence, assuming independence of consequent from antecedent”.
Lift ratio = Confidence / Benchmark Confidence
“A lift ratio greater than 1.0 suggests that there is some usefulness to the rule. In other words, the level of association between the antecedent and consequent item sets is higher than would be expected if they were independent. The larger the lift ratio, the greater the strength of the association”.
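The three measures defined above can be sketched in Python. The five-transaction basket below is made-up illustration data, not the textbook's dataset:

```python
def support_count(transactions, itemset):
    """Count the transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions)

def confidence(transactions, antecedent, consequent):
    """Support of (antecedent and consequent) divided by support of antecedent."""
    return (support_count(transactions, antecedent | consequent)
            / support_count(transactions, antecedent))

def lift(transactions, antecedent, consequent):
    """Confidence divided by the benchmark confidence (consequent support rate)."""
    benchmark = support_count(transactions, consequent) / len(transactions)
    return confidence(transactions, antecedent, consequent) / benchmark

# Hypothetical market-basket data for illustration only.
transactions = [
    {"nail polish", "brushes"},
    {"nail polish", "brushes", "blush"},
    {"nail polish"},
    {"blush"},
    {"nail polish", "brushes"},
]
sup = support_count(transactions, {"nail polish", "brushes"})   # 3
con = confidence(transactions, {"nail polish"}, {"brushes"})    # 3/4 = 0.75
lif = lift(transactions, {"nail polish"}, {"brushes"})          # 0.75 / (3/5) ≈ 1.25
print(sup, con, lif)
```

A lift above 1.0, as here, means nail polish buyers purchase brushes more often than the overall brush purchase rate would predict.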
D. Study the Association Mining example from the URL: http://www.solver.com/association-rules-example.
E. Problem 13.3 on pages 277-278 of the textbook, Data Mining for Business Intelligence: Concepts, Techniques, and Applications in Microsoft Office Excel with XLMiner, 2nd edition, 2010, by Galit Shmueli, Nitin R. Patel, and Peter C. Bruce, ISBN: 978-0-470-52682-8. The data file is attached. (40 points)
Cosmetics purchases: The data shown in Figure 11.6 are a subset of a dataset on cosmetic purchases, given in binary matrix form. The complete dataset (in the file Cosmetics.xls) contains data on the purchases of different cosmetic items at a large chain drugstore. The store wants to analyze associations among purchase of these items, for purposes of point of sale display, guidance to sales personnel in promoting cross sales, and for piloting an eventual time-of-purchase electronic recommender system to boost cross sales. Consider first only the subset shown in Figure 11.6.
1. Select several values in the matrix and explain their meaning.
We will use the table shown below as an example. The binary matrix contains the values 1 and 0: a 1 indicates the presence, and a 0 the absence, of an item in a transaction. Association rules can be created between item sets in this database that have a support count of at least 2 (equivalent to a percentage support of 2/12 = 16.6%). Therefore, rules can be created from item sets that were purchased together in at least 16.6% of the transactions. Applying this threshold, the item sets have the following counts:
| Item set | Support (count) |
|----------|-----------------|
| Nail Polish | 8 |
| Blush, Nail Polish, Brushes, Concealer, Bronzer | 2 |
| Nail Polish, Brushes | 6 |
| Blush | 5 |
| Brushes | 6 |
| Concealer, Bronzer | 7 |
The first item set, "Nail Polish", has a support count of 8, meaning this item was bought in 8 transactions.
The second item set, "Blush, Nail Polish, Brushes, Concealer, Bronzer", has a count of 2, meaning all 5 items were bought together in 2 transactions.
The third item set, "Nail Polish, Brushes", has a count of 6, meaning these two items were bought together in 6 transactions.
The fourth item set, "Blush", has a count of 5, meaning this item was bought in 5 transactions.
The fifth item set, "Brushes", has a count of 6, meaning this item was bought in 6 transactions.
The sixth item set, "Concealer, Bronzer", has a count of 7, meaning these two items were bought together in 7 transactions.
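The counting step can be sketched in Python with NumPy. The 0/1 matrix below is hypothetical illustration data in the same form as Figure 11.6, not the actual figure values:

```python
import numpy as np

# Hypothetical binary purchase matrix: rows = transactions, columns = items.
items = ["Blush", "Nail Polish", "Brushes", "Concealer", "Bronzer"]
M = np.array([
    [1, 1, 1, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [0, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
])

def count(itemset):
    """Support count: transactions with a 1 in every column of the itemset."""
    cols = [items.index(i) for i in itemset]
    return int(M[:, cols].all(axis=1).sum())

c1 = count(["Nail Polish"])                                              # 5
c2 = count(["Nail Polish", "Brushes"])                                   # 4
c3 = count(["Blush", "Nail Polish", "Brushes", "Concealer", "Bronzer"])  # 2
print(c1, c2, c3)
```

The same reading applies to the real matrix: a row supports an item set exactly when every corresponding cell holds a 1.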
Consider the results of the Association Rules analysis shown in Figure 11.7, and:
2. For the first row, explain the "Conf. %" output and how it is calculated.
In the first row of Figure 11.7, the "Conf. %" value of 60.19% means that 60.19% of the customers who bought Bronzer and Nail Polish also bought Brushes and Concealer.
To calculate the confidence (Conf. %), the following equation is used (Textbook, p. 196):

Confidence = Support(a ∪ c) / Support(a) = 62 / 103 = 0.601942

that is, the number of transactions containing both the antecedent and consequent item sets, divided by the number of transactions containing the antecedent item set.
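The arithmetic for the first row can be checked directly, using the support counts reported in Figure 11.7:

```python
support_a = 103        # transactions containing the antecedent {Bronzer, Nail Polish}
support_a_and_c = 62   # transactions containing both antecedent and consequent
conf = support_a_and_c / support_a
print(round(conf, 6))  # 0.601942, i.e. Conf. % of 60.19%
```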
3. For the first row, explain the "Support (a), Support (c) and Support (a U c)" output and how it is calculated.
Support (a): 103 transactions contain the antecedent; that is, Bronzer and Nail Polish were bought together 103 times.
Support (c): 77 transactions contain the consequent; that is, Brushes and Concealer were bought together 77 times.
Support (a ∪ c): 62 transactions contain both the antecedent and the consequent; that is, customers bought Bronzer, Nail Polish, Brushes, and Concealer together 62 times.
4. For the first row, explain the "Lift Ratio" and how it is calculated.
For the first row, the lift ratio is 3.908713, which means that Bronzer, Nail Polish, Brushes, and Concealer are almost four times more likely to be sold together in a single transaction than would be expected if the antecedent and consequent item sets were independent. The lift ratio is calculated by dividing the confidence by the benchmark confidence, where the benchmark confidence is Support(c) divided by the total number of transactions.
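The lift calculation for the first row can be reproduced as follows. The total transaction count n is not shown in the excerpt; n = 500 is an assumption that makes the benchmark confidence (77/500 = 0.154) consistent with the reported lift ratio:

```python
support_c = 77   # transactions containing the consequent {Brushes, Concealer}
n = 500          # assumed total transaction count (not given in the excerpt)
confidence = 62 / 103
benchmark_confidence = support_c / n   # expected confidence under independence
lift_ratio = confidence / benchmark_confidence
print(round(lift_ratio, 6))  # 3.908713
```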
5. For the first row, explain the rule that is represented there in words.
Rule #2, which is located in the first row of the table, means that if a customer purchased Bronzer and Nail Polish, she/he is likely to also purchase Brushes and Concealer. This combination occurred in 62 transactions, with a confidence of 60.19%.
Now, use the complete dataset on the cosmetics purchases, which is given in the file Cosmetics.xls.
6. Using XLMiner, apply Association Rules to these data.
7. Interpret the first three rules in the output in words.
Rule #1: when Brushes are purchased, Nail Polish is also purchased. This rule has a confidence of 100% and a lift ratio of 3.571429, meaning the combination occurs far more often than would be expected if the items were independent.
Rule #2: when Nail Polish is purchased, Brushes are also purchased. This rule has a confidence of 53.21% and the same lift ratio of 3.571429, and it occurred in 149 transactions.
Rule #3: when Eyeliner and Mascara are purchased, Concealer and Eye Shadow are also purchased. As shown in the table, this combination occurred in 114 transactions, with a confidence of 65.14% and a lift ratio of 3.240938.
8. Reviewing the first couple of dozen rules, comment on their redundancy, and how you would assess their utility.
Based on the textbook, association rules capture "what goes with what". Following the static retail presentation "buy X together with Y", the rules can be summarized as follows:
Rules #1 and #2: customers who purchase Nail Polish are likely to also purchase Brushes. Special offers on this pair may therefore be unnecessary, since customers who buy Brushes consistently buy Nail Polish anyway; the two rules are largely redundant with each other.
Rules #3 to #10: Eyeliner, Mascara, Concealer, Eye Shadow, and Blush form a suitable product group for joint promotion, since these products are popular with customers and frequently purchased together.
Rules #11 to #14: suggest something similar for Lip Gloss, Eye Shadow, Foundation, and Mascara.
The remaining 22 rules: in general, Mascara is a good companion item for the other products.
Works Cited
Shmueli, G., Patel, N. R., & Bruce, P. C. (2010). Data Mining for Business Intelligence: Concepts, Techniques, and Applications in Microsoft Office Excel with XLMiner (2nd ed.). Hoboken, NJ: John Wiley & Sons.
Neural Networks Prediction - Example. Frontline Solvers. Retrieved July 15, 2014, from http://www.solver.com/xlminer/help/neural-networks-prediction-example
Association Rules - Example. Frontline Solvers. Retrieved July 17, 2014, from http://www.solver.com/association-rules-example