Please use this identifier to cite or link to this item: 192.168.6.56/handle/123456789/46160
Full metadata record
DC Field | Value | Language
dc.contributor.author | Academic, Kluwer | -
dc.date.accessioned | 2019-02-21T07:50:40Z | -
dc.date.available | 2019-02-21T07:50:40Z | -
dc.date.issued | 2002 | -
dc.identifier.isbn | 0-306-47011-X | -
dc.identifier.uri | http://10.6.20.12:80/handle/123456789/46160 | -
dc.description.abstract | Classification decision tree algorithms are used extensively for data mining in many domains, such as retail target marketing and fraud detection. Highly parallel algorithms for constructing classification decision trees are desirable for dealing with large data sets in a reasonable amount of time. Algorithms for building classification decision trees have a natural concurrency, but are difficult to parallelize due to the inherently dynamic nature of the computation. In this paper, we present parallel formulations of a classification decision tree learning algorithm based on induction. We describe two basic parallel formulations: one is based on the Synchronous Tree Construction Approach and the other on the Partitioned Tree Construction Approach. We discuss the advantages and disadvantages of these methods and propose a hybrid method that combines their good features. We also provide an analysis of the computation and communication costs of the proposed hybrid method. Moreover, experimental results on an IBM SP-2 demonstrate excellent speedups and scalability. | en_US
dc.language.iso | en | en_US
dc.publisher | Created in the United States of America | en_US
dc.subject | data mining, parallel processing, classification, scalability, decision trees | en_US
dc.title | Scaling Algorithms, Applications and Systems | en_US
dc.type | Book | en_US
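
The abstract above names two parallel formulations (the Synchronous Tree Construction Approach and the Partitioned Tree Construction Approach) and a hybrid of the two. The sketch below is a minimal, purely sequential Python simulation of that idea, not code from the book: it models processors as horizontal data shards, sums per-shard class histograms the way the synchronous approach would combine them over the network, and hands whole subtrees to separate groups once the tree frontier grows. The processor count, the single-feature Gini split, the minimum node size, and the "switch when the frontier has at least P nodes" rule are all illustrative assumptions, not the authors' algorithm or cost model.

```python
# A minimal sequential simulation of the two formulations named in the abstract.
# Everything here (P, the candidate thresholds, the Gini split, the switch rule)
# is an illustrative assumption, not the authors' implementation.

from collections import Counter
import random

P = 2          # simulated processor count (assumption)
MIN_NODE = 20  # stop expanding nodes smaller than this (assumption)

# Synthetic one-feature, two-class data set.
random.seed(0)
data = [(x, int(x > 0.6)) for x in (random.random() for _ in range(400))]

def class_counts(records, threshold):
    """Class histograms for the two children of the split 'feature < threshold'."""
    left, right = Counter(), Counter()
    for x, y in records:
        (left if x < threshold else right)[y] += 1
    return left, right

def gini(counts):
    n = sum(counts.values())
    return 0.0 if n == 0 else 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split(shards, thresholds):
    """Pick the threshold minimising weighted Gini impurity. Each shard is scanned
    separately and the per-shard histograms are summed afterwards; in the
    synchronous formulation that summation is the communication step, here it is
    just an in-memory reduction."""
    best = None
    for t in thresholds:
        left, right = Counter(), Counter()
        for shard in shards:              # local pass on each processor's shard
            l, r = class_counts(shard, t)
            left += l                     # global reduction of the count tables
            right += r
        n_l, n_r = sum(left.values()), sum(right.values())
        score = (n_l * gini(left) + n_r * gini(right)) / (n_l + n_r)
        if best is None or score < best[0]:
            best = (score, t)
    return best[1]

thresholds = [i / 10 for i in range(1, 10)]
frontier = [data]   # nodes (record subsets) still waiting to be expanded
for level in range(3):
    if not frontier:
        break
    if len(frontier) < P:
        # Synchronous phase: all P processors cooperate on every frontier node,
        # each scanning its own horizontal shard of that node's records.
        mode = "synchronous"
        splits = [best_split([node[p::P] for p in range(P)], thresholds)
                  for node in frontier]
    else:
        # Partitioned phase: each subtree is handed to its own processor group,
        # which expands it independently with no further cross-group communication.
        mode = "partitioned"
        splits = [best_split([node], thresholds) for node in frontier]
    print(f"level {level}: {mode} phase, {len(frontier)} frontier node(s)")
    next_frontier = []
    for node, t in zip(frontier, splits):
        left = [(x, y) for x, y in node if x < t]
        right = [(x, y) for x, y in node if x >= t]
        next_frontier += [child for child in (left, right) if len(child) > MIN_NODE]
    frontier = next_frontier
```

Per the abstract, the actual hybrid method's switching decision comes from an analysis of computation and communication costs; the fixed frontier-size threshold above only stands in for that decision.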
Appears in Collections: Building Construction

Files in This Item:
File | Description | Size | Format
10.pdf |  | 1.18 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.