Thursday, February 27, 2020

Smart Database Design to Avoid Faulty Data - Research Paper

This paper examines the diverse ways of entering data into databases, the reasons why poor-quality data is entered and stored, and the impact such data has on organizations. One of those reasons is improper database design; therefore, to help avoid poor-quality data, this paper describes the features of good database design and offers guidelines for developing a smart database that avoids faulty data.

Keywords: database design, data quality, avoiding faulty information, Garbage In Garbage Out (GIGO), database normalization, smart database design.

Introduction

Today, every decision, from solving a particular problem to deciding the future of an organization, depends on the availability, accuracy, and quality of information. "Information is an organizational asset, and, according to its value and scope, must be organized, inventoried, secured, and made readily available in a usable format for daily operations and analysis by individuals, groups, and processes, both today and in the future" (Neilson, 2007). Organizational information is neither just bits and bytes saved on a server, nor is it limited to client data and the hardware and software that store it. Managing the data an organization deals with is a process of gathering, normalizing, and sharing that information with all of its stakeholders. Handling such a huge volume of information manually is difficult, which is why databases were developed and are in high demand. A database makes it easy to store, handle, and use an organization's remarkably diverse information. A database can be defined as a "collection of information that is organized so that it can easily be accessed, managed, and updated" (Rouse, 2006). Developing a database is not a complicated process, nor is it complex to use and manipulate the information stored in one.
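As a minimal sketch of the definition above (a collection of information organized so it can easily be accessed, managed, and updated), the following Python example uses the standard-library sqlite3 module; the "customers" table and its columns are hypothetical and chosen purely for illustration.

```python
import sqlite3

# An in-memory database with a hypothetical "customers" table,
# illustrating organized storage of information.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers ("
    "  id INTEGER PRIMARY KEY,"
    "  name TEXT NOT NULL,"
    "  phone TEXT)"
)

# Stored ...
conn.execute("INSERT INTO customers (name, phone) VALUES (?, ?)",
             ("Alice", "555-0100"))

# ... easily accessed ...
row = conn.execute("SELECT name, phone FROM customers WHERE id = 1").fetchone()

# ... and easily updated.
conn.execute("UPDATE customers SET phone = ? WHERE id = 1", ("555-0199",))
```

The same organized-access idea applies regardless of the database engine; sqlite3 is used here only because it ships with Python and needs no server.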
A database facilitates maintaining order in what could otherwise be an extremely chaotic information environment. In a database, each piece of information is stored individually, and its management begins with indexing the existing data by categorizing the stored items according to common identifying factors. This is done by assigning values that signify the appropriate attributes (e.g., national identity numbers, names, cell phone numbers). Undoubtedly, if the data gathering and storing process is flawed, the stored data will be incorrect as well; this principle is known as Garbage In, Garbage Out (GIGO). Quality and accuracy of data are critical and fundamental for any database an organization develops or maintains, whether the database serves a small goal with limited scope or underpins a multi-billion-dollar information system. It can be said that the value of data is directly proportional to its quality. This is one of many reasons an inadequately designed database may present incorrect information, may be complicated to use, or may even stop working correctly.

Why Poor Data Quality?

There are a number of ways to enter data into a database, including initial data conversion (converting data from a previously existing source), consolidating an existing database with a new one, manual data entry, batch feeds, and real-time data entry interfaces. Consequently, many diverse root causes exist for inaccurate, poor-quality data being stored in databases. Some stem from inappropriate database design, whereas others are due to external factors. The basis of these errors is far more than just a stumble-fingered typist (typos). Some causes of poor-quality data other than database design include receiving
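The GIGO principle discussed above can be countered at the point of entry: a well-designed schema declares constraints so that "garbage" is rejected before it is ever stored. The sketch below again uses Python's standard-library sqlite3 module; the "employees" table, its columns, and the chosen age range are hypothetical examples of such constraints.

```python
import sqlite3

# Hypothetical schema: constraints stop bad data at the point of entry,
# so it is never stored (avoiding Garbage In, Garbage Out).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE employees ("
    "  national_id TEXT PRIMARY KEY,"                 # identity must be unique
    "  name TEXT NOT NULL,"                           # name is mandatory
    "  age INTEGER CHECK (age BETWEEN 16 AND 100))"   # plausible range only
)

# A valid record is accepted.
conn.execute("INSERT INTO employees VALUES (?, ?, ?)", ("AB-123", "Alice", 34))

rejected = False
try:
    # A typo such as age 340 violates the CHECK constraint and is rejected,
    # so the faulty record never enters the database.
    conn.execute("INSERT INTO employees VALUES (?, ?, ?)", ("CD-456", "Bob", 340))
except sqlite3.IntegrityError:
    rejected = True
```

Declaring such rules in the schema, rather than relying solely on entry clerks or application code, is one of the features of good database design this paper advocates.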
