TY  - JOUR
TI  - Secure Data Duplication Process for Better Performance of Primary Databases
AU  - Mohd Nadeem
AU  - Md Ateeq Ur Rahman
JO  - International Journal of Scientific Research in Computer Science, Engineering and Information Technology
PB  - Technoscience Academy
DA  - 2017/12/31
PY  - 2017
DO  - https://doi.org/10.32628/IJSRCSEIT
UR  - https://ijsrcseit.com/CSEIT172666
VL  - 2
IS  - 6
SP  - 159
EP  - 163
AB  - In real-time scenarios, data deduplication is available but not implemented dynamically. The purpose of this paper is to study data deduplication and its effect on performance, especially when dealing with remote servers. Remote servers are normally not capable of detecting duplicate data, since they are situated and programmed to accept data from many users around the globe. Only at the client side can a technique or scheme be implemented in which duplicate data is detected and the data owner is informed, thereby conserving cloud infrastructure. The aim is to detect data being duplicated on cloud servers so that more resources remain available and performance improves. Storing data on remote servers requires attention to both security and consistency. The data owner can check and verify whether their data is already stored in duplicate on the cloud server before uploading any new content from the client side. By introducing a novel technique, this paper achieves the goal of detecting and reporting data duplication on the cloud server before outsourcing.