Jump Diffusion Process Model Considering the Optimal Data Partitioning for Cloud with Big Data  
Author: Tomoya Takeuchi


Co-Author(s): Yoshinobu Tamura; Shigeru Yamada


Abstract: Recently, cloud computing with big data has become known as a next-generation software service paradigm. However, only a few effective methods of software reliability assessment that consider big data and cloud computing have been presented. A cloud computing environment with big data is managed by several kinds of software: Hadoop and NoSQL are used as big-data-targeted processing software, while OpenStack and Eucalyptus are well-known cloud computing software. In particular, it is important to consider the optimal data partitioning for cloud computing with big data. In this paper, we propose a method of component-oriented reliability assessment based on a neural network in order to consider the optimal data partitioning for cloud computing with big data. Moreover, we propose a method of system-wide reliability assessment based on a jump diffusion process model considering big data on cloud computing.
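The abstract does not specify the authors' model parameters, but a jump diffusion process generally combines continuous Brownian fluctuation with discrete Poisson-driven jumps. The sketch below is a generic Merton-style simulation (not the authors' reliability model); all parameter names and values are illustrative assumptions.

```python
import numpy as np

def simulate_jump_diffusion(s0=1.0, mu=0.05, sigma=0.2,
                            jump_rate=0.5, jump_mean=0.0, jump_std=0.1,
                            t_max=1.0, n_steps=1000, seed=0):
    """Simulate one sample path of a geometric jump diffusion:
    continuous drift/diffusion plus compound Poisson jumps.
    All parameters here are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    s = np.empty(n_steps + 1)
    s[0] = s0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))        # Brownian increment
        n_jumps = rng.poisson(jump_rate * dt)    # number of jumps in this step
        jump = rng.normal(jump_mean, jump_std, n_jumps).sum()
        # log-Euler update: drift + diffusion + accumulated jump sizes
        s[i + 1] = s[i] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * dw + jump)
    return s

path = simulate_jump_diffusion()
```

In a reliability-growth setting, such jumps are typically used to capture abrupt changes (e.g., sudden fault detections or environmental shifts) that a pure Wiener process cannot represent.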


Keywords: Big Data, Cloud Computing, Software Reliability, Jump Diffusion Process Model
Article #: 22167
Proceedings of the 22nd ISSAT International Conference on Reliability and Quality in Design
August 4-6, 2016 - Los Angeles, California, U.S.A.