Computer Laboratory Literature
A computer is a device that accepts input data, processes it, and stores it; the output information can then be retrieved through a series of stored commands. In technical terms, a computer can be described as a combination of the processor, the memory, and the input and output elements. This description is known as the Von Neumann model. The architecture was first advanced by Burks, Goldstine, and Von Neumann in 1946. The model is named after John Von Neumann, a Hungarian mathematician and computer scientist who made remarkable contributions to both fields. Prior to his publications, a number of devices resembling a computer had been built, but his model marked the turning point in the history of computer development. An example is the work of Charles Babbage (1791-1871), who built devices that were precursors of the modern computer. This paper will discuss everything that the word ‘computer’ entails, including a computer’s components and their functionality, with the aim of providing an insight into the internal structure as well as the manner in which a computer functions.
In his publication, Von Neumann pointed out that data and instructions are both represented as indistinguishable bits, so the two can be stored on a common platform. What differentiates them is the access mechanism and the context: a program counter accesses the instructions, while an effective address register is used to access the data. Before the design of the single address space, several other devices with distinct data and instruction memories had been built. An example is Howard Aiken’s design, which later came to be known as the Harvard architecture; it had separate data and instruction memories as well as separate data paths.
A wide range of variations of the Von Neumann architecture has been advanced in modern devices such as scientific calculators; these are known as tagged architectures. Many other variations have followed, making the architecture an enduring and classic one. It is remarkable in the sense that it is the base that unifies all the other designs since 1950, harmonizing the hardware design and the programmer interface. This is the feature that makes it easy to identify the components of a modern computer: the memory, the input/output, the central processing unit, and the control. Several models other than Von Neumann’s have been advanced, but the simplicity of the original model hinders the commercial production of these alternatives. One of them is the dataflow architecture, which requires separate hardware to detect the availability of the data needed to execute a command. An example is the Pentium Pro, whose decode units maintain a pool of 20-30 instructions.
The three computer parts can be described as a nested set of state machines. Basically, the memory holds the instructions as well as the data; they are transferred to the logic, and the data, having been processed, is then returned to the memory.
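To make this cycle concrete, here is a minimal Python sketch of a stored-program machine in which instructions and data share a single memory. The tiny LOAD/ADD/STORE/HALT instruction set is invented purely for illustration; real processors use far richer encodings.

```python
# Minimal sketch of the Von Neumann fetch-decode-execute cycle.
# The instruction set (LOAD/ADD/STORE/HALT) is invented for illustration.

def run(memory):
    """Interpret instructions stored in the same memory as the data."""
    pc = 0          # program counter: address of the next instruction
    acc = 0         # accumulator: a single working register
    while True:
        op, arg = memory[pc]        # fetch
        pc += 1
        if op == "LOAD":            # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc       # processed data returned to memory
        elif op == "HALT":
            return memory

# Instructions and data share one address space: cells 0-3 hold code,
# cells 4-6 hold data.
program = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
           4: 2, 5: 3, 6: 0}
print(run(program)[6])  # prints 5
```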
Computers have become an integral part of human activity across all fields. Since the introduction of the IBM personal computer in 1981, reliance on them in the workplace and elsewhere has grown, both in geographical penetration and in the degree to which various tasks depend on them.
The method used in this research is a review of the previous work that has been done on the subject, together with the laboratory literature of computers.
The processor can be referred to as the main chip in a personal computer, because almost all the other hardware works in collaboration to serve it. The processor is often referred to as the CPU, an abbreviation for Central Processing Unit; it is the component that actually executes the processes. One of the major functions of the operating system is to increase the effective use of the processor. To accomplish this, multiple processes are allowed to co-exist in the system. While those processes share the CPU, they can only execute on a mutually exclusive basis, so the system allocates time on the CPU fairly, depending on the expected function. This scheduling objective is what is referred to as CPU management.
The operating system allocates the CPU in time units, and any other process that needs to be executed has to wait for the expiry of the allocated time slice. The system is designed to be efficient in terms of time and space, so that the time and space allocated are as small as possible yet enough to carry out the expected role. The allocation is done by a system program known as the CPU manager, or scheduler, and takes place at two levels. The higher level controls the degree of multiprocessing and is known as long-term scheduling; it decides how many processes are to be brought into the main memory for processing and selects those that are to be deferred. At the lower level, the objective is to allocate a process to the CPU as soon as it becomes free. At this short-term level of scheduling, the CPU chooses the most appropriate process for the slot that has opened. Short-term scheduling improves the PC’s performance by reducing the response time as well as increasing the throughput.
Performance evaluation determines how well the work has been done. One unit used in this evaluation is the throughput, defined as the amount of work completed per unit of time; for instance, the number of applications completed per unit of time is the system’s throughput. When the system is kept busy, the utilization rises. Utilization is the ratio of the busy time to the total time, expressed as a percentage. Increasing the utilization does not necessarily increase the throughput, as some of the work does not come from an application process; this is referred to as system overhead. Other terms used in measuring performance include the turnaround time, which is the time between the submission and the completion of a particular task, and the response time, the time taken for the processor to respond to a command. Proportionality and predictability are also higher-level terms used in performance evaluation, associated with the turnaround time and the response time. Further statistical measures have been developed to denote various aspects of processor performance; one example is the variance of the response time, which is interpreted as denoting the quality of service.
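As an illustration of these measures, the short Python sketch below computes throughput, utilization, turnaround time, and response time for three hypothetical jobs on a single CPU; the submit/start/finish times are made up for the example.

```python
# Illustrative calculation of the performance measures defined above.
# Each job is a (submit, start, finish) triple, in seconds; the numbers
# are hypothetical.
jobs = [(0, 0, 4), (1, 4, 6), (2, 6, 11)]

busy_time = sum(finish - start for _, start, finish in jobs)
total_time = max(finish for *_, finish in jobs)  # observation window

throughput = len(jobs) / total_time              # jobs completed per second
utilization = 100.0 * busy_time / total_time     # busy time as a percentage
turnaround = [finish - submit for submit, _, finish in jobs]
response = [start - submit for submit, start, _ in jobs]

print(f"throughput  = {throughput:.2f} jobs/s")  # 0.27 jobs/s
print(f"utilization = {utilization:.0f}%")       # 100%
print(f"turnaround  = {turnaround}, response = {response}")
```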
Consider a multiprocessor computer system with a finite number of identical CPUs, all connected to the same main memory. All processes are assumed to reside in the main memory and wait for their turn on a CPU whenever a vacancy is created. They queue as they are allocated slots on the CPU, and the criterion used in the selection process is known as the priority. A typical flow of processes through such a system is as follows: a new process joins the ready queue and waits for allocation. Once allocated, the system executes the task, after which the process may leave, wait for services from other processes, or remain suspended. In any of these cases, the system reallocates the CPU to another task.
The CPU scheduling policies are divided into two broad categories: pre-emptive and non-pre-emptive. In the pre-emptive category, a process may be suspended by the system, and its CPU slot is reclaimed for re-allocation to another process, depending on the relative importance of the processes. In the non-pre-emptive category, a process is allowed to stay on the CPU for as long as it takes to complete the desired task. On the one hand, non-pre-emptive scheduling is dangerous, as a task caught in an infinite loop through a programming error can occupy the CPU and bring the system to a standstill; on the other hand, under pre-emptive scheduling such a process will simply be pre-empted, so the latter is more favourable. The main disadvantage of pre-emptive scheduling is that a process may be pre-empted many times, resulting in a delay in the completion of that particular task.
A process descriptor is responsible for modelling the processes; the descriptor stores the execution context. A process running on the CPU therefore owns some hardware parts, such as the memory management registers and the general registers. The values in the registers differ from those stored in the descriptor; for instance, the latest value of the PC register is not available in the descriptor, so if the kernel wishes to suspend the ongoing process, the latest hardware context has to be saved in the process descriptor. If the kernel wishes to resume the process, the context is loaded from the descriptor before execution continues. Switching the CPU from one context to another is known as context switching (also task switching, context exchange, or process dispatch). The scheduler is invoked only when a context switch is anticipated; it then selects a new process to run. Context switching resides in the kernel space and involves two distinct steps: first, the context of the current process is saved in its descriptor; then the hardware is loaded with the context of the next process to run. The switched-out process waits in privileged mode in the kernel.
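The following Python sketch models those two steps under the simplifying assumptions above; the descriptor fields and register names are illustrative, not those of any real kernel.

```python
# Schematic model of a context switch: the hardware context of the
# running process is saved into its descriptor, and the context of the
# next process is loaded before it resumes. Field names are illustrative.

class ProcessDescriptor:
    def __init__(self, pid):
        self.pid = pid
        self.context = {"pc": 0, "registers": [0] * 8}  # saved execution context

def context_switch(cpu, current, next_proc):
    current.context = dict(cpu)      # 1) save the outgoing context
    cpu.clear()
    cpu.update(next_proc.context)    # 2) load the incoming context
    return next_proc                 # next_proc now owns the CPU

cpu = {"pc": 42, "registers": [1] * 8}   # hardware state of the running process
p1, p2 = ProcessDescriptor(1), ProcessDescriptor(2)
running = context_switch(cpu, p1, p2)
print(p1.context["pc"], running.pid)     # 42 2
```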
Scheduling Schemes
If the ready queue is loaded with many processes waiting to be allocated the CPU, the operating system selects a process to be executed as soon as the CPU becomes available. A scheduling algorithm is designed based on the objective that needs to be accomplished; the algorithms are distinct and mutually exclusive, and there is no single algorithm that is universally suitable for all purposes. They occupy just a small portion of the CPU but are expected to be very efficient in the execution of their duties. Below are some examples of the algorithms that have been in use.
First come first serve (FCFS) scheduling: This is the simplest algorithm; as the name suggests, processes are executed on a first-in, first-out basis. The scheduler allocates the CPU to the request at the head of the queue. In one version, a process holds the CPU until completion even when it is waiting for additional resources, which wastes a lot of time. Another version allows a process to leave the CPU and rejoin the queue once the resource it requires is available. Because FCFS is non-pre-emptive, the average response time is not optimized, which is a disadvantage.
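A minimal Python sketch of FCFS, assuming all processes arrive at time zero and the burst times are hypothetical:

```python
# FCFS waiting-time sketch: processes run to completion in arrival order.
from collections import deque

def fcfs(bursts):
    queue, clock, waits = deque(bursts), 0, []
    while queue:
        burst = queue.popleft()   # serve the request at the head of the queue
        waits.append(clock)       # time spent waiting before being served
        clock += burst            # non-pre-emptive: runs until completion
    return waits

print(fcfs([24, 3, 3]))           # [0, 24, 27] -> average wait 17
```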
Shortest job first (SJF) scheduling: In this algorithm, the process with the lowest time requirement is executed first. A process requests the CPU, specifying its expected execution time, and is allocated the next available slot in that order. If two or more processes require similar times, FCFS is used between them. Complexity arises in practice, since it is difficult to estimate the time required by a process and, besides, a long process may be kept waiting indefinitely.
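For comparison, the same hypothetical workload under SJF, where sorting by burst time stands in for the scheduler's selection step:

```python
# SJF sketch: the same workload, but the shortest burst is served first.
def sjf(bursts):
    clock, waits = 0, []
    for burst in sorted(bursts):  # shortest time requirement first
        waits.append(clock)
        clock += burst
    return waits

print(sjf([24, 3, 3]))            # [0, 3, 6] -> average wait 3
```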
Shortest remaining time next (SRTN) scheduling: This is the pre-emptive counterpart of SJF: whenever the CPU becomes available, the scheduler picks the process with the shortest remaining execution time. The limitation is that the execution time of each process must be known in advance; thus, despite its theoretical capacity to offer the lowest average response time, its practical application is limited.
Priority based scheduling: This is an umbrella for all the above algorithms: a defined priority, either dynamic or static, is used to do the scheduling. The system assesses the priorities of the processes in the ready queue and weighs them against the one currently in execution, which can therefore be suspended in favour of a higher-priority task.
Round-robin (RR) scheduling: This is specially designed for interactive time-sharing systems and ensures a fast response time for short processes. It is a form of pre-emptive FCFS in which each process is allowed a specific time frame (a quantum); if the task is not completed within that time, the process is pre-empted and returned to the rear of the queue, and the cycle is repeated until the task is completed.
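A round-robin sketch with an assumed time quantum of four time units; a process that does not finish within its quantum is pre-empted and re-queued:

```python
# Round-robin sketch with a hypothetical time quantum of 4 time units.
from collections import deque

def round_robin(bursts, quantum=4):
    queue = deque(enumerate(bursts))       # (pid, remaining time) pairs
    clock, finish = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        slice_ = min(quantum, remaining)   # run for at most one quantum
        clock += slice_
        if remaining > slice_:
            queue.append((pid, remaining - slice_))  # back to the rear
        else:
            finish[pid] = clock            # task completed
    return finish

print(round_robin([24, 3, 3]))             # {1: 7, 2: 10, 0: 30}
```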
Multi-level / multiple queues based scheduling: This scheme holds processes in multiple queues. Processes are scheduled from the non-empty queue with the highest priority, and FCFS or RR is used within each queue. It is a generic scheme from which many other formats can be generated: kernel processes hold the highest priority, real-time processes form the next group, and the lower-priority background server processes come last. Other forms of scheduling include multiprocessor scheduling and thread scheduling.
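A schematic of the multi-level idea, with three illustrative priority levels and FCFS within each queue (the level names are assumptions for the sketch, not a fixed standard):

```python
# Multi-level queue sketch: the scheduler always serves the highest
# non-empty queue; FCFS is used within each queue.
from collections import deque

levels = {"kernel": deque(), "real-time": deque(), "background": deque()}
order = ["kernel", "real-time", "background"]   # highest priority first

def pick_next():
    for name in order:                     # scan from the highest priority down
        if levels[name]:
            return levels[name].popleft()  # FCFS within the chosen queue
    return None

levels["background"].append("backup-job")
levels["kernel"].append("interrupt-handler")
print(pick_next())   # interrupt-handler
print(pick_next())   # backup-job
```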
In general, scheduling, though a small part of the system, is an important component of its overall functionality. The long-term scheduler is responsible for the degree of multiprocessing, while the short-term scheduler decides on the allocation according to priority for the sake of efficiency.
Memory
Memory refers to the hardware part of a computer that is responsible for storing data temporarily. For instance, when instructions are given through any of the input devices, they are usually stored in the memory; similarly, when the results have been computed, they are also stored in the memory. In general, then, memory can be said to be the hardware that holds the input instructions as well as the output results. The memory in a computer is broadly divided into two categories:
Primary Memory
This refers to the part of the memory that is used by the computer for its internal functioning. Owing to this important role, it is often referred to as the internal memory. This memory is further categorized into two parts:
Read Only Memory (ROM): This kind of memory holds instructions that are written once and that the computer only refers to when accessing them; the computer cannot write data to it, hence the name read only memory. It takes the form of an electronic chip onto which the data or programs are written using special devices. Chips that cannot be erased are called programmable read only memory (PROM), while the erasable ones are known as erasable programmable read only memory (EPROM).
Random Access Memory (RAM): Unlike ROM, RAM provides both reading and writing capabilities; the computer writes instructions to it and reads them from it, so it is at times referred to as read/write memory. The instructions given to a computer are written to the RAM and accessed from it by the CPU, and the results are likewise stored on it before being passed to an output device. The data in RAM remains there only as long as the computer is on and is erased as soon as the computer goes off, so it is known as temporary memory.
Cache memory enhances the reading and writing speed, thus enhancing the execution speed of a computer. Frequently used instructions are copied from the RAM into the cache, where they are easier for the processor to access; similarly, the computer writes instructions to the cache, from which they are transferred to the RAM. Since reading from and writing to the cache is fast, it improves the overall speed of the computer.
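The principle can be illustrated with a toy Python cache in front of a slower store; the four-entry size and first-in-first-out eviction are arbitrary choices for the sketch, not a model of real cache hardware:

```python
# Toy illustration of the cache principle: a small, fast store in front
# of a larger, slower one.
from collections import OrderedDict

RAM = {addr: addr * 10 for addr in range(100)}   # the slow, large memory
cache, CACHE_SIZE = OrderedDict(), 4

def read(addr):
    if addr in cache:                 # cache hit: fast path
        return cache[addr], "hit"
    value = RAM[addr]                 # cache miss: go to RAM
    cache[addr] = value
    if len(cache) > CACHE_SIZE:
        cache.popitem(last=False)     # evict the oldest entry
    return value, "miss"

for addr in [1, 2, 1, 1, 5]:
    print(addr, *read(addr))          # 1 miss, 2 miss, 1 hit, 1 hit, 5 miss
```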
Secondary Memory
This is the stable memory in a computer, which retains its contents even when the computer is switched off. It is used for the storage of data and programs, especially when they are not in use, and includes devices such as floppies, CD-ROMs, hard discs, and magnetic tapes. These store data for longer periods and are cheaper, though slow compared to the RAM. This memory type is subdivided into two types:
Sequential: In this type, the user needs to search for the data in a sequential manner, from the beginning to the point where the required data is found. The advantage of this kind of memory is that it is cheap, but it is inefficient in terms of speed. An example is the magnetic tape.
Direct: Direct secondary memory is also known as random access. The implication is that the user can go directly to the information that is required; no sequence is needed to access the information, making data access much faster.
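The contrast can be sketched in a few lines of Python: the "tape" must be scanned from the beginning, while direct access jumps straight to the record.

```python
# Sequential vs direct access to the same set of records.
records = [f"record-{i}" for i in range(1_000)]

def sequential_read(target):
    steps = 0
    for record in records:            # scan from the start, like a tape
        steps += 1
        if record == target:
            return record, steps
    return None, steps

def direct_read(index):
    return records[index], 1          # one step: go straight to the data

print(sequential_read("record-750"))  # ('record-750', 751)
print(direct_read(750))               # ('record-750', 1)
```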
Input and Output Elements
These are the devices used to feed instructions into the computer and to display the output from it.
To/From the User: The input devices are used to provide input to a computer for processing. The input may be data, a program, or even a response to a prompt from the computer. The user feeds instructions into a computer through hardware devices such as the mouse, keyboard, smart card readers, microphones, scientific measuring equipment that transmits recorded data to the computer, joysticks, webcams, and USB devices, among others. The devices used to display the results from the processor include monitors, printers, and fax machines.
To/From the Network: The processed information is displayed on the screen, from which the user can disseminate it to the desired location through emails, faxes, or printed copies. Owing to the need for secure storage facilities, technologies have been developed that allow the storage of data in a remote location; to access such data, one needs an internet connection to download the desired information. The devices involved include modems, infra-red devices, and network adapters.
Support Elements
These are the devices that facilitate and enhance the efficient use of computers. They include software as well as hardware specifically designed to complement the use of the computer. Examples include the PDF reader for securing and reading documents, portable USB readers that facilitate the physical transfer of data, and antivirus programs that protect the computer against malware such as spyware and program exploits.
Computers are devices that accept and process data, which is stored in a retrievable format that can be accessed by the user when needed. The model in use was first described by the mathematician and computer scientist John Von Neumann. The structure of a computer includes the processor, the memory, and the input/output devices. The processor is an important component, as it is the one that executes the instructions given by the user, and all the other components are designed to complement it. Several terms associated with the processor were discussed in the paper, including performance metrics, context switching, and system models, among others. In order to perform the numerous instructions it is given efficiently, a support program known as the scheduler is used to regulate the allocation of the processes in terms of space as well as time, based on various aspects.
The memory stores the instructions as well as the processed information. It is broadly divided into the primary and the secondary memory. The primary memory includes the RAM and the ROM, which are particularly important in processing; the secondary memory is important for the long-term storage of the processed information as well as the raw data. The input and output devices are responsible for feeding in the instructions and displaying the processed information, respectively.