The volume of data used by enterprises, academia, and the sciences has been growing at an exponential rate in recent years. At the same time, the demand to process and analyze these large quantities of data has also increased. Previously, a single computer or a small number of computers could not process and monitor such large amounts of data, but cloud systems can now meet this requirement while reducing the cost of data processing. Therefore, many enterprises use cloud systems to address this problem. A basic framework of the cloud system is MapReduce. Before running a MapReduce job, the user must configure the relevant settings, including the number of computers and virtual machines. Because data sizes differ from job to job, users may request more or fewer computers and virtual machines than they actually need, either wasting cloud resources or running out of them. In our approach, when a job is submitted to the cloud system, it is first processed by a single node for a period of time; if that node detects that the job cannot be completed within the period, it asks another node to share the computation. All nodes then continue processing until the job completes. We therefore propose a mechanism that constructs a hierarchical dynamic configuration of the cloud system (HDCOCS) to use cloud resources efficiently.
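The dynamic allocation idea described above can be sketched as a small simulation. This is only an illustrative model, not the paper's actual implementation: the time-slice length, the work/rate units, and the helper-recruitment formula are all assumptions introduced here for clarity.

```python
import math

TIME_SLICE = 10.0  # seconds a node works alone before re-evaluating (assumed value)

def nodes_needed(total_work, rate_per_node, deadline):
    """Estimate how many nodes must share the job to finish by the deadline.

    total_work:    remaining work units in the job
    rate_per_node: work units one node completes per second
    deadline:      seconds remaining before the job should finish
    """
    capacity_per_node = rate_per_node * deadline
    return max(1, math.ceil(total_work / capacity_per_node))

def run_job(total_work, rate_per_node, deadline):
    """Simulate the dynamic configuration: one node starts the job, and
    after the first time slice it recruits additional nodes if it cannot
    finish alone in time. Returns (node count, total elapsed seconds)."""
    # Phase 1: a single node processes the job for one time slice.
    done = rate_per_node * TIME_SLICE
    remaining = max(0.0, total_work - done)
    if remaining == 0.0:
        # The single node finishes the whole job by itself.
        return 1, total_work / rate_per_node

    # Phase 2: recruit enough nodes to finish within the remaining deadline,
    # then all nodes share the leftover work evenly.
    time_left = deadline - TIME_SLICE
    nodes = nodes_needed(remaining, rate_per_node, time_left)
    elapsed = TIME_SLICE + remaining / (nodes * rate_per_node)
    return nodes, elapsed
```

For example, a job of 1000 work units at 10 units/second per node with a 60-second deadline would recruit a second node after the first slice, finishing in 55 seconds instead of the 100 seconds a lone node would need; a small job finishes on the single node without recruiting anyone.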