Computer Organization and Architecture

Computer Organization

The definition of the term “organizing” is to put together into an orderly, functional, and structured whole. The term Computer Organization refers to a conceptual understanding of the inner workings of a computer. It emphasizes system components, logical design, the structure of instructions, computer arithmetic, processor control, assembly language programming, and methods of performance enhancement.

It is concerned with the way the hardware components operate and the way they are connected together to form the computer system. The various components are assumed to be in place, and the task is to investigate the organizational structure to verify that the computer's parts operate as intended.


Every computer essentially consists of five elements or units, namely the arithmetic logic unit (ALU), the memory unit, the control unit, the input unit, and the output unit.





Input: This is the process of entering data and programs into the computer system. The input unit takes data from the user and passes it to the computer in an organized manner for processing.

Central Processing Unit (CPU): The Central Processing Unit (CPU) takes data and instructions from the storage unit and performs all sorts of calculations based on the instructions given and the type of data provided. The results are then sent back to the storage unit. The CPU may be called the brain of the computer system: like a brain, it takes all major decisions, performs all sorts of calculations, and directs the functions of the different parts of the computer by activating and controlling their operations. The ALU and the CU of a computer system are jointly known as the central processing unit.

The task of the ALU is to perform arithmetic and logical operations such as addition, subtraction, and comparison. The control unit controls all operations, including input, processing, and output, and takes care of the step-by-step execution of every operation inside the computer.
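As a rough illustration (not a hardware model), the following Python sketch mimics how an ALU selects among arithmetic and logical operations based on an opcode supplied by the control unit; the opcode names and the `alu` function are invented for this example.

```python
# Toy ALU: illustrative only. A real ALU is combinational hardware,
# not a Python function, and the opcode names here are made up.
def alu(opcode, a, b):
    if opcode == "ADD":
        return a + b
    elif opcode == "SUB":
        return a - b
    elif opcode == "AND":
        return a & b
    elif opcode == "OR":
        return a | b
    elif opcode == "CMP":            # logical comparison
        return int(a == b)
    raise ValueError("unknown opcode: " + opcode)

# The control unit decodes each instruction and feeds the ALU
# the right opcode and operands, e.g.:
print(alu("ADD", 7, 5))              # 12
print(alu("AND", 0b1100, 0b1010))    # 8, i.e. 0b1000
```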

Memory: The process of saving data and instructions is known as storage. Data has to be fed into the system before the actual processing starts, because the processing speed of the Central Processing Unit (CPU) is so fast that data must be staged where it can be supplied to the CPU at a comparable speed. All data and instructions are stored here before and after processing, and intermediate results of processing are also stored here.

Output: This is the process of producing results from the data in order to obtain useful information. The output produced by the computer after processing must be kept somewhere inside the computer before being given to you in human-readable form, and it may also be stored for further processing.

Computer Architecture

Computer architecture is the conceptual design and fundamental operational structure of a computer system. It is a functional description of the requirements and design implementations for the various parts of a computer, focusing largely on how the central processing unit (CPU) operates internally and accesses addresses in memory.

Computer architecture is the structure of a computer that a machine-language programmer must understand to write a correct (timing-independent) program for that machine. In other words, it is the user-visible portion of the machine: its instruction set. It includes emphasis on logical design, computer design, and system design, and is concerned with the structure and behavior of the computer as seen by the user.

Computer Architecture = Machine Organization (What the machine looks like) + Instruction Set Architecture (How you talk to the machine) 

Computer architecture mainly specifies the relationships between the parts of a computer system, such as I/O devices, memory, and the processor.
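To make the split between the instruction set (how you talk to the machine) and the organization (how the machine carries instructions out) a little more concrete, here is a minimal Python sketch of a toy instruction set interpreter; the instruction names and the three-register machine are invented for this example.

```python
# Toy machine: the "ISA" is the set of instructions a programmer may use;
# the "organization" is how a particular implementation executes them.
# Instruction names and the register file are invented for illustration.
registers = {"R0": 0, "R1": 0, "R2": 0}

def execute(program):
    for op, *args in program:
        if op == "LOAD":                 # LOAD reg, constant
            reg, value = args
            registers[reg] = value
        elif op == "ADD":                # ADD dest, src1, src2
            dest, a, b = args
            registers[dest] = registers[a] + registers[b]
        elif op == "PRINT":              # PRINT reg
            print(registers[args[0]])
        else:
            raise ValueError("not part of this toy ISA: " + op)

# A programmer only needs the ISA to write this program; a different
# implementation (a different organization) could run it unchanged.
execute([("LOAD", "R0", 2),
         ("LOAD", "R1", 3),
         ("ADD", "R2", "R0", "R1"),
         ("PRINT", "R2")])               # prints 5
```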




Benefits Of Studying Computer Organization And Computer Architecture

The computer lies at the heart of computing. Without it, most of the computing disciplines today would be a branch of theoretical mathematics. To be a professional in any field of computing today, one should not regard the computer as just a black box that executes programs by magic. All students of computing should acquire some understanding and appreciation of a computer system's functional components, their characteristics, their performance, and their interactions. There are practical implications as well. Students need to understand computer architecture in order to structure a program so that it runs more efficiently on a real machine. In selecting a system to use, they should be able to understand the trade-offs among various components, such as CPU clock speed versus memory size.

Reasons for studying computer architecture

1. Suppose a graduate enters industry and is asked to select the most cost-effective computer for use throughout a large organization. An understanding of the implications of spending more for various alternatives, such as a larger cache or a higher processor clock rate, is essential to making the decision.

2. Many processors are used not in PCs or servers but in embedded systems. A designer may program, in C, a processor that is embedded in some real-time or larger system, such as an intelligent automobile electronics controller. Debugging the system may require the use of a logic analyzer that displays the relationship between interrupt requests from engine sensors and machine-level code.

3. Concepts used in computer architecture find application in other courses. In particular, the way in which the computer provides architectural support for programming languages and operating system facilities reinforces concepts from those areas.

Computer organization and architecture encompasses a broad range of design issues and concepts. A good overall understanding of these concepts will be useful both in other areas of study and in future work after graduation. 

The Factors That Prohibit Us From Speeding Up

Computer processing capability depends on several factors such as: 

- CPU clock speed
- Cache memory size and speed
- Front Side Bus (FSB)
- Computer RAM
- Hard disk access and rotational speed
- Power consumption and cost

For two CPUs with the same pipeline depth but different clock speeds, the higher clock speed gives that CPU an advantage. One major reason that CPU clock speed has not gone up is simply that transistors themselves have not gotten a lot faster. Intel is currently manufacturing on 32nm HKMG, having moved from 45nm; two years before that it was 65nm, and before that 90nm. The problem is that while transistors are getting smaller, they are not getting faster, and understanding why requires some background on the MOSFET.
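To make the clock-speed comparison concrete, the classic CPU performance equation, CPU time = instruction count × CPI × clock period, can be used. The sketch below is illustrative only; the instruction count, CPI, and clock rates are invented numbers, not measurements of any real processor.

```python
# CPU time = instruction count * CPI / clock rate (the classic performance equation).
# All figures below are invented purely for illustration.
def cpu_time_seconds(instruction_count, cpi, clock_hz):
    return instruction_count * cpi / clock_hz

program_instructions = 2_000_000_000   # 2 billion instructions
cpi = 1.5                              # average clock cycles per instruction

# Same pipeline depth (so roughly the same CPI), different clock speeds:
print(cpu_time_seconds(program_instructions, cpi, 2.0e9))  # 1.5 s at 2.0 GHz
print(cpu_time_seconds(program_instructions, cpi, 3.0e9))  # 1.0 s at 3.0 GHz
```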

Caches are small because the silicon used to build them is quite expensive and, especially on CISC-type CPUs, there might not be enough space on the chip to hold larger ones. There is a trade-off among the three key characteristics of memory: cost, capacity, and access time (a worked example of this trade-off follows the list below).

- Faster access time – greater cost per bit
- Greater capacity – smaller cost per bit
- Greater capacity – slower access time 
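One common way to quantify the capacity-versus-access-time trade-off is average memory access time, AMAT = hit time + miss rate × miss penalty. The latencies and miss rates in the sketch below are invented for illustration, not taken from any real cache.

```python
# Average memory access time (AMAT) = hit time + miss rate * miss penalty.
# The latencies and miss rates below are invented for illustration.
def amat_ns(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

# A small, fast cache in front of slow main memory:
print(amat_ns(hit_time_ns=1.0, miss_rate=0.05, miss_penalty_ns=100.0))  # 6.0 ns

# A larger cache may lower the miss rate but raise the hit time slightly:
print(amat_ns(hit_time_ns=1.2, miss_rate=0.03, miss_penalty_ns=100.0))  # 4.2 ns
```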

The front side bus (FSB) remains one of the biggest bottlenecks on system performance. The FSB is the primary interface that connects a microprocessor to other system devices. Typically, the FSB allows the processor to communicate with main memory (RAM), the system chipset, and other peripheral buses. The speed of the FSB, together with the speed of RAM, determines computer speed more than the absolute clock speed of the CPU. But because FSB speeds do not increase as often or as dramatically as CPU speeds, processor manufacturers do not typically call attention to them, instead focusing on the raw speed of the processor.
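To get a rough sense of why the FSB can be the bottleneck, its peak theoretical bandwidth can be estimated as bus width × bus clock × transfers per clock. The bus parameters in the sketch below are illustrative and not tied to any specific product.

```python
# Peak theoretical bus bandwidth = width (bytes) * clock (Hz) * transfers per clock.
# The bus parameters below are illustrative only.
def bus_bandwidth_gb_s(width_bytes, clock_hz, transfers_per_clock):
    return width_bytes * clock_hz * transfers_per_clock / 1e9

# A 64-bit (8-byte) bus clocked at 200 MHz, transferring four times per clock:
print(bus_bandwidth_gb_s(8, 200e6, 4))   # 6.4 GB/s

# A multi-GHz CPU core can request data much faster than this,
# which is why the bus, not the core, often sets the pace.
```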

Power consumption is another factor in the design of modern computers. Power efficiency can often be traded for performance or cost benefits. The typical measurement in this case is MIPS/W (millions of instructions per second per watt).
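As a quick illustration of how power efficiency is compared, MIPS/W simply divides the instruction rate by the power drawn. The instruction rates and power figures below are invented for illustration.

```python
# Power efficiency as MIPS per watt.
# The instruction rates and power figures below are invented for illustration.
def mips_per_watt(instructions_per_second, watts):
    return (instructions_per_second / 1e6) / watts

# A fast, power-hungry desktop core versus a slower, frugal embedded core:
print(mips_per_watt(3.0e9, 65.0))   # ~46 MIPS/W
print(mips_per_watt(0.5e9, 2.0))    # 250 MIPS/W (more efficient despite being slower)
```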
