What are the services of Job Control?


<p>There are various services of Job Control, which are as follows &minus;</p><p><strong>Job definition</strong> &minus; The first step in creating an operations process is to have some way to define a series of steps as a job and to specify relationships among jobs. This is where the dependency structure of the data warehouse loads is defined.</p><p>In some cases, if the load of a given table fails, it will affect your ability to load the tables that depend on it. For example, if the customer table is not properly updated, loading sales facts for new customers that did not make it into the customer table is risky.</p><p><strong>Job scheduling</strong> &minus; The operations environment needs to provide standard capabilities, like time- and event-based scheduling. Warehouse loads are typically triggered by some upstream system event, such as the successful completion of the general ledger close or the final application of sales adjustments to yesterday&rsquo;s sales figures. This includes the ability to monitor database flags, test for the existence of files, compare file creation dates, and so on.</p><p><strong>Monitoring</strong> &minus; No self-respecting systems person would tolerate a black-box scheduling system. The people responsible for running the loads need to know as much as possible about what is going on. The system needs to provide information about which step the load is on, what time it started, how long it took, and so on.</p><p>In a handcrafted warehouse, this can be achieved by having each step write to a log file or table, as described next. A store-bought system should support a more visual means of keeping you informed about what is happening.
If the warehouse shares computing resources, more sophisticated systems will also report what else was running on the system during the data staging window, provide comparison reports with average times for each process, and so on.</p><p><strong>Logging</strong> &minus; This means collecting information about the entire load process, not just what is happening at the moment. Log information supports the recovery and restart of a process if errors occur during job execution.</p><p><strong>Notification</strong> &minus; The importance of this capability correlates closely with the number of users and their reliance on the warehouse. If you don&rsquo;t have many users, and they haven&rsquo;t come to count on the warehouse being available when they need it, it may be acceptable to wait until morning to find out the load failed and restart it.</p><p><strong>Error handling</strong> &minus; You must plan for unrecoverable errors during the load, because they will happen. Your system should anticipate this and provide crash recovery, stop, and restart capabilities. First, look for tools and design your extracts to minimize the impact of a crash. For example, a load process should commit relatively small sets of records at a time and keep track of what has been committed. The size of the set should be adjustable, since transaction size has different performance implications on different DBMSs.</p>
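As a minimal sketch of event-based scheduling, a load script might test for the existence and freshness of an upstream flag file before starting. The file path, flag-file convention, and age threshold below are illustrative assumptions, not part of any particular tool:

```python
import os
import time

TRIGGER_FILE = "/data/staging/gl_close.done"  # hypothetical flag file written by the GL close
MAX_AGE_SECONDS = 24 * 3600                   # only accept a flag created in the last day

def upstream_event_ready(path: str, max_age: int) -> bool:
    """Return True if the trigger file exists and is recent enough."""
    if not os.path.exists(path):
        return False
    age = time.time() - os.path.getmtime(path)
    return age <= max_age

if upstream_event_ready(TRIGGER_FILE, MAX_AGE_SECONDS):
    print("upstream event ready; starting warehouse load")
else:
    print("waiting: general ledger close not signalled yet")
```

A real scheduler would poll or be event-driven rather than run once, but the same existence-and-creation-date checks apply.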
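The error-handling advice above, committing small sets of records at a time and tracking what has been committed, can be sketched as follows. The table names, batch size, and use of SQLite are illustrative assumptions; the point is that a crash loses at most one uncommitted batch, and the checkpoint lets the load restart where it left off:

```python
import sqlite3

def load_in_batches(conn, rows, batch_size=500):
    """Insert rows in small committed batches, recording progress in a
    checkpoint table so a crashed load can be restarted, not redone."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, amount REAL)")
    cur.execute("CREATE TABLE IF NOT EXISTS load_checkpoint (last_row INTEGER)")
    cur.execute("SELECT last_row FROM load_checkpoint")
    row = cur.fetchone()
    start = row[0] if row else 0          # resume from the last committed position
    for i in range(start, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        cur.executemany("INSERT INTO sales (id, amount) VALUES (?, ?)", batch)
        cur.execute("DELETE FROM load_checkpoint")
        cur.execute("INSERT INTO load_checkpoint VALUES (?)", (i + len(batch),))
        conn.commit()                     # commit a small set of records at a time
    return start

conn = sqlite3.connect(":memory:")
rows = [(i, float(i)) for i in range(1, 1201)]
load_in_batches(conn, rows, batch_size=500)
```

Making `batch_size` a parameter reflects the advice that transaction size should be adjustable, since its performance impact varies across DBMSs.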
Updated on 09-Feb-2022 13:22:21