Lecture 8


LEARNING GOALS:

1. Learn the general features of the Von Neumann machine.

2. Learn about some general features of modern computer architectures.

3. Learn about a simplified version of the 68000 processor.

4. Learn about the fetch/execute cycle.

5. Learn the general features of an assembly language program.


TABLE OF CONTENTS

8.1 INTRODUCTION

8.2 THE VON NEUMANN MACHINE

8.3 AN OVERVIEW OF REAL COMPUTERS

8.4 A SIMPLIFIED MODEL OF THE 68000

8.5 EXECUTING AN INSTRUCTION

8.5.1 RTL
8.5.2 The fetch / execute cycle
8.6 INTRODUCTION TO THE ASSEMBLY LANGUAGE PROGRAM

8.7 SUMMARY
 


8.1 INTRODUCTION

Computer languages are organized in a hierarchical structure, as can be seen in Figure 8.1. Usually there are several levels of organization behind the application interface that is accessible to the user. Consider a program that simulates the game of chess. The user communicates with the computer through an application level language. A valid command in an application level language could be: "Move the white queen to E5", which can be entered into the computer by means of mouse clicks, or by typing some words on the keyboard. However, the computer does not understand the application level language directly; otherwise we would have built artificial brains by now. Below the application level language comes the high level language, e.g. C, in which a programmer has written all the programs necessary to handle the application level commands. Very few computers are able to execute high level language programs directly. The great majority of computers need compilers (which are themselves programs) in order to translate high level programs into assembly language (low level) code, which is the level at which most computers operate. Assembly language is the "native" language of the machine.

It is important to note that, while high level language programs are essentially system-independent, i.e. they can run on virtually any computer, assembly language programs are very system-dependent: an assembly language program written for one processor can be run on that processor only, or on a group of closely related compatible processors. If you learn to program in the assembly language of the M68000 processor, you will not be able to write assembly language programs for the Intel 80x86 family, for example. However, the philosophy behind the design of an assembly language program is the same from one system to another.

Technically speaking, assembly language code is a human readable representation of the numerical code machines actually work with. Thus, the assembly language level can also be called the machine code level.
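As a small illustration (this is only a sketch: the individual instructions are explained in later lectures, and COUNT and SUM are hypothetical names standing for the memory locations of two variables), a compiler might translate the high level statement sum := count + 25 into 68000 assembly language along these lines:

        MOVE.W  COUNT,D0        Copy the value of the variable COUNT into register D0
        ADD.W   #25,D0          Add the constant 25 to it
        MOVE.W  D0,SUM          Store the result in the variable SUM

A single high level statement typically expands into several such machine instructions.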

All the computer organization levels that we have talked about so far have one thing in common: they are all software levels. If we go further down, we have to leave the software domain and enter the hardware domain: the area of electronics, circuitry and silicon that we are not much concerned with in this course. The heavy line in Fig. 8.1 that separates software from hardware is called the hardware-software barrier. As we advance through the course, we will see that this barrier is not nearly as clear-cut as it appears, and as the years go by and technology advances, it will become harder and harder to distinguish where software stops and where hardware begins.

Figure 8.1
The hierarchical levels of organization in the computer

 

 
But why do we have to study programming in assembly language, when it is so much easier and simpler to program in a high level language like Pascal, C, C++, or Java? And why do we study the Motorola 68000 family of processors in particular, when there are so many other newer and more powerful processors?

Here is the answer to the first question: nothing reflects the architecture and the organization of a computer system better than the nature of its assembly language. Assembly language allows you to program the hardware directly. After all, assembly language is just a human-readable symbolic representation of the numeric code digital machines work with: this is their native language.

A program written in a high level language does not show the actual steps taken by the processor to execute the program; the assembly language program does. The assembly language programmer is fully in control of the actions that the computer will take; high level programmers have to rely on compilers to generate the assembly language code of their high-level programs, and that generated code is in general less efficient than the equivalent assembly language code written by a human expert. A well-written assembly language program therefore often executes faster than the equivalent program written in a high level language.

Of course, assembly language programming is highly prone to errors and very hard to debug, which is why in practice people rarely write entire programs in assembly language. However, knowledge of assembly language can be very useful in optimizing some key parts of a high level program. On average, 90% of the work in a program is done by 10% of the code. If that particular 10% of the code is rewritten in Assembler (Assembler is used as a shorthand for assembly language), the result can be a substantial improvement in speed.

Here is the answer to the second question: the Motorola 68000 family of processors is considered to be one of the most elegant examples of modern day computer architectures. It is a very good educational tool, because its architecture is relatively simple (especially if compared to the Intel family of chips), yet it is very functional and it reflects all the main aspects of the philosophy behind the design of a modern computer processor.
 



8.2 THE VON NEUMANN MACHINE

Computing machines nowadays exist in many different forms. If we ignore our old friend the abacus, they range from the simple controller of your washing machine, which is at the sub-computer level, to the human brain itself. This course concentrates on the so-called Von Neumann Machine. John von Neumann was a scientist who, in the 1940s, laid out the general features of today's mainstream computer architectures.

The Von Neumann machine is based on a common memory system that stores both the instructions that determine the course of action taken by the computer and the data operated on by those instructions.

Let's examine the memory system of a simple computer.

The memory of a computer is an array, or a collection, of many storage units called memory locations. If you like, you can view the memory as a big box subdivided into many little boxes - the memory locations. The big box has an input terminal (the address port), an input/output terminal (the data port) and a control terminal that forces the memory to execute a memory read or a memory write. A port in the computer world is similar to a port in the human world. It is a place where information enters and/or leaves. Each one of the little boxes, or memory locations, has a unique address, which is used to refer to that particular location. The contents of a memory location are called data.

The Central Processing Unit ( CPU ), also called the processor, is responsible for reading instructions from memory and executing them. It is the "brain" of the computer. It is connected to the memory by means of three buses:

Figure 8.2
The CPU and the memory

 

 
A bus is a collection of lines (wires) along which information in the form of electrical signals flows from one part of the computer to another.

The address bus transmits the address of the memory location the CPU wants to access. Information can flow on the address bus in one direction only: from the CPU to memory. The data bus is used to communicate data from the CPU to memory and vice versa. Thus, the data bus is bi-directional, i.e. data can flow in both directions, but not in both directions at the same time. The control bus is used by the CPU to inform the memory system whether a read cycle or a write cycle is taking place. During a read cycle, data is read from memory and sent to the CPU, and during a write cycle, data is sent by the CPU to the memory to be stored.

The data that the CPU reads from and writes to memory can be anything: an instruction, a number, even an address of another memory location. Remember that memory stores both instructions and the data operated on by those instructions.

The most general way of describing how a Von Neumann machine works is through the Von Neumann Module, which we write in Pascal-like pseudocode:

Module VonNeumann
    L:= 0
    REPEAT
            Get the instruction from memory location L
            EXECUTE the instruction
            L := L + 1
    UNTIL told to stop (may take forever)
END Module

Remember that a program is a sequence of instructions with associated arguments. Those instructions and their associated arguments are translated into numerical code that the computer can understand, and they are stored in memory.

The above module simply shows that the Von Neumann machine fetches the instruction located at the memory address given by L (L points to the memory location), executes it, and then increments L so that L always gives the address of the next instruction to be accessed. The loop continues until the machine is told to stop. The module assumes that the first instruction is located at address 0. Note that the instructions are executed strictly in sequence; as we will see later, this is not always what happens in the real world.

We are now going to look at how a Von Neumann machine executes an instruction.

We will assume that an instruction is a binary sequence divided into two fields: an operation code and an operand address. Later we will see that there are many other possible formats. The operation code (or op-code) is a number which specifies what action needs to be taken, and the operand address informs the CPU where the operand taking part in the operation is located in memory. The following pseudocode algorithm describes in general terms how an instruction is executed.

Module EXECUTE the instruction
    Decode the instruction
    IF operation requires additional data from memory THEN fetch data from memory
    Perform the operation
    IF instruction requires data to be stored in memory THEN store data in memory
END Module

When an instruction is read from memory, it is decoded by the CPU. The CPU looks at the op-code, which tells whether the operation is an addition, a division, a data movement operation from one memory location to another, etc. Then, if the instruction needs additional data from memory, the CPU reads the memory again at the address specified in the operand address field. After the CPU has all the data it needs, it can perform the operation specified by the op-code. If the instruction requires data to be written to memory, e.g. if the result of a subtraction has to be stored, then the data is stored in memory.

You can see that there are typically two or more accesses to memory for each instruction: one to read the instruction itself, and one or more to read the operand(s) of the operation. For example, if you want to take the contents of memory location 101, add them to the contents of memory location 102 and store the result in location 103, then you need to perform three reads from memory and one write to memory: you need to read the instruction itself, fetch the contents of memory location 101, fetch the contents of memory location 102, and, after having performed the addition, store the result in location 103.
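On the real 68000, which we will model shortly, a computation like this would in fact be spread over several instructions, each with its own memory accesses. The following is only a sketch: the addresses are hypothetical, and are chosen even because the 68000 requires word operands to lie at even addresses.

        MOVE.W  $100,D0         Read the word stored at address $100 into register D0
        ADD.W   $102,D0         Read the word at address $102 and add it to D0
        MOVE.W  D0,$104         Write the result back to memory at address $104

Counting the instruction fetches as well (one memory read per instruction at this level of abstraction), the sequence causes five memory reads and one memory write.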

Since all this information has to be continuously exchanged between the CPU and the memory, the path joining the CPU and the memory has been called the Von Neumann bottleneck. The Von Neumann bottleneck is one of the limiting factors of the Von Neumann machine, since the width of the information path sets a limit to the maximum possible volume and speed of information flow between the CPU and the memory.

Figure 8.3
The Von Neumann bottleneck

 

 


8.3 AN OVERVIEW OF REAL COMPUTERS

After having examined some of the features of the Von Neumann machine, we will look at some of the features found in real computers.

Figure 8.4

 

 
We have already met the CPU (Central Processing Unit). It is composed of:

    - Registers
    - The ALU (Arithmetic & Logic Unit)
    - The CU (Control Unit)

Registers are special, small, high-speed storage devices. They can hold either data as such, or the address of a memory location where data of interest can be found. The fact that registers are located on the CPU itself greatly reduces the time it takes to access their contents. Registers are used as working storage during the execution of a program, because it is much faster and more convenient to manipulate numbers held in registers than the same numbers held in memory. Registers are visible only at the assembly language level: high level languages give the programmer no direct access to them. There are different types of registers:

    - general-purpose data registers, which hold the values a program is working on;
    - address registers, which hold the addresses of memory locations;
    - special-purpose registers, such as the program counter (PC), the instruction register (IR), and the memory address and memory buffer registers (MAR, MBR).

We will see them in more detail when we examine how instructions are executed in the fetch/execute cycle.

The ALU (Arithmetic & Logic Unit), as its name suggests, performs arithmetic and logic operations, i.e. calculations and comparisons.

The CU (Control Unit) is the decision taking unit. It interprets the bit pattern of an instruction and decides which actions are to be taken.

The CPU is made on a single monolithic piece of silicon. The silicon is treated with carefully chosen impurities (doped) in order to control its electrical properties. It is then possible to place several million transistors on the silicon, and to arrange them so as to form logic gates, which in turn are combined to perform specific actions.
 

The term memory refers to storage areas within the computer. Technically speaking, the term memory is used to refer to storage of data on chips, while the term storage refers to data stored on disks, tapes, etc. Memory is sometimes used as a shorthand for Physical Memory, which refers to the actual chips used to store data.

Every computer has a certain amount of physical memory, also called main memory or RAM: Random-Access Memory. This is the type of memory that we saw earlier when discussing the Von Neumann machine. It stores both instructions and data. As we already said, it can be considered as an array (collection) of small memory locations. Usually each memory location has the capability of storing one byte of information. Thus, 1 Megabyte of memory is divided into 2^20 = 1,048,576 locations and is capable of holding just over one million bytes of data. Main memory is used as a temporary storage area while the computer is operating. Secondary storage is a term that refers to the hard disk, floppy disks and other devices that store data for the long term.

There are several types of memory:

    - RAM is called Random Access Memory because any memory location can be accessed directly, without scanning the neighbouring locations, as long as its address is known. In general, devices allowing their data to be accessed in any order (at random) are called random access devices, while devices whose data can be accessed only in sequential order are called sequential access devices. Disks are random-access devices: they are divided into sectors, and any sector can be accessed directly while the disk is spinning. Tapes, on the contrary, are sequential access devices: there is no way to read data at the beginning of a tape, then at the end of the tape, without passing through the middle of the tape.

    - A cache is a special high-speed memory device sitting between the main memory and the CPU. Depending on the type of cache and the system it is used on, cache sizes vary from 8 Kbytes to 1 or 2 Megabytes. Caches are used to hold information frequently accessed by programs. Since reading information from the cache is several times faster than reading the same information from RAM, caches speed up the workings of the computer considerably.

Every real computer system also has to have input/output devices, which allow human users to communicate with the CPU. Without I/O devices, a computer would be completely useless. I/O devices include the keyboard, mouse, screen, printer, speakers, etc. Many of these devices are serial devices, meaning that they send and receive data one bit at a time, or one character at a time. For example, data is sent from the keyboard one character at a time, and data is written to the screen one character at a time. The baud rate refers to the rate at which data is transferred over such a serial connection.

Any external device attached to the computer is referred to as a peripheral, and I/O devices are in that sense peripherals. Each peripheral has at least one port (a port is the place where information enters and leaves a device).

The CPU "sees" its peripherals as memory locations having a special address. That means that any processor capable of accessing memory can also perform input/output by using the existing CPU-memory data paths. From the CPU's point of view, an I/O port is nothing more than a memory location like any other one, and when it needs to send or receive data to/from the port of that particular peripheral, the CPU behaves exactly as if it were sending/receiving data to/from a regular memory location. This arrangement is called memory-mapped I/O (more on that in future lectures), since the I/O ports are mapped onto the existing memory interface.
 

As we already said, buses are collections of parallel wires used to transfer data from one part of the computer to another. Each of the wires that compose the bus transmits one bit. Thus, the width of the bus determines how much data can be transmitted at one time. For example, a 16-bit bus is composed of 16 parallel lines and is able to transmit 16 bits at a time, while a 32-bit bus is able to transmit 32 bits at a time. The speed of a computer therefore depends greatly on the width of its buses. The Von Neumann bottleneck results from the fact that the width of even the widest bus is small compared with the quantity of information that could potentially flow within a computer.

Every bus is composed of at least two parts: an address bus, which transfers information about where data should go, and a data bus, which transfers the actual data. As we already know, there may also be a control bus, which transfers information specifying what to do with the data: read, write, etc. There are several types of buses, used for specific purposes.

What is the best way of connecting the different parts of a computer with buses?

One way would be to connect them point-to-point, i.e. to have a bus connecting each device with every other device. However, for n devices that would mean n(n-1)/2, or roughly n^2/2, connections (buses). Such a connection method would quickly result in a big mess.

Most modern computers have a common bus, called the System Bus, which is connected to every major part of the computer, including the CPU and the memory.

Figure 8.5
The System Bus
 

 
There are computers that are called modular microcomputers. Their components are grouped into modules: there is a CPU module, encapsulating the CPU and other related devices; a memory module, encapsulating memory devices; a peripherals module; etc. In these computers, the System bus is connected to each of the modules, and within each module, there are local buses that interconnect the components of the module.

Now that we've looked very briefly at some of the hardware parts found in most modern day computers, we will concentrate on a (very simplified) model of the Motorola 68000 chip, and we will see how it actually executes instructions.
 



8.4 A SIMPLIFIED MODEL OF THE 68000

Here is a very simplified model of the structure of a 68000 processor:
 
Figure 8.6

 
Its functional units are:

MAR: The Memory Address Register, which holds the address of the next location to be accessed in memory. The contents of the MAR point to that location. For example, if the contents of the MAR are 200, then the location with address 200 will be the next one to be accessed by the CPU.

MBR: The Memory Buffer Register, which holds the data just read from memory, or the data that is about to be written to memory. All information flowing into or out of memory has to pass through the MBR. In Computer Science, the term buffer is often used to refer to a device that temporarily holds data waiting to be processed, or data that has just been processed and is waiting to be stored.

IR: The Instruction Register, which holds the instruction most recently read from memory while it is being decoded by the CU.

PC: The Program Counter, also called the Instruction Counter, which is a register holding the address in memory of the next instruction to be executed. After an instruction is fetched from memory, the PC is automatically incremented to hold the address of, or point to, the next instruction to be executed.

D0: A general-purpose data register used to hold data of any kind. The data it holds is either to be used by the processor, or is the result of an operation performed by the processor. We will soon see that the real 68000 has 8 such general-purpose data registers, numbered D0, D1, D2,..., D7.

ALU: The Arithmetic & Logic Unit - performs calculations and comparisons.

CU: The Control Unit - interprets the bit pattern of the instruction (its op-code) in the IR and is responsible for taking the necessary steps to execute it.

This is the simplified model of the 68000 we will start to work with, and we'll add more and more new details and components as we go along.



8.5 EXECUTING AN INSTRUCTION

8.5.1 RTL

Before actually attacking assembly language programs and the description of how an instruction is executed, it is worthwhile introducing a notation called RTL (Register Transfer Language), popularized by Hill and Peterson. RTL is a pseudocode notation that helps us define the action of assembly language instructions. The notation is very simple, it has very few rules, it mimics high-level languages, and it will help us greatly in understanding the flow of data within the computer.

Here are the rules that you need to know:

    - Square brackets denote contents: [PC] means "the contents of the program counter", and [D0] means "the contents of data register D0".
    - M(x) denotes the memory location whose address is x, so [M(1000)] means "the contents of memory location 1000".
    - The backward arrow <- denotes a transfer of data: [MAR] <- [PC] means "the contents of the PC are copied into the MAR".
    - Ordinary arithmetic may appear on the right-hand side: [PC] <- [PC] + 1 means "the contents of the PC are incremented by 1".
    - Expressions can be nested: [M([MAR])] means "the contents of the memory location whose address is given by the contents of the MAR".


8.5.2 The fetch / execute cycle
 

In order to see how a computer executes an instruction, we will break the action into a series of steps. We have already mentioned that computers operate in a two-phase mode called the fetch/execute cycle. During the first phase (the fetch phase) of the cycle, the instruction is fetched from memory and brought to the Control Unit for decoding. In the second phase (the execute phase), the instruction is executed. We will look first at the fetch phase of the cycle.

The first step of the fetch cycle is: [MAR] <- [PC]

That means that the contents of the program counter are transferred to the Memory Address Register. Remember that the PC contains the address of the next instruction to be executed, and the role of the MAR is to hold the address of the next memory location to be accessed. Note that here we assume that the size of one memory location is big enough to hold one entire instruction. At the end of this step, the CPU is ready to access the memory location that contains the next instruction to be executed.

Figure 8.7
[MAR] <- [PC]

 
Once the PC has done its job, it is automatically incremented (we assume it is incremented by 1) to point to ("to point to" and "hold the address of" mean the same thing) the "new" next instruction to be executed. This is the second step:
[PC] <- [PC] + 1

Figure 8.8
[PC] <- [PC] + 1

 
The third step is to read the contents of the memory location pointed at by the MAR.
[MBR] <- [M([MAR])]
The above RTL expression means that the contents of the memory location whose address is given by the contents of the MAR are transferred to the MBR. In other words, the memory buffer register now contains the instruction that is to be executed next by the CPU.

Figure 8.9
[MBR] <- [M([MAR])]

 
In the fourth step, the contents of the MBR are copied to the Instruction Register.
[IR] <- [MBR]
Remember that the role of the IR is to hold the instruction while it is decoded by the Control Unit, the Control Unit being the decision taking unit.

Figure 8.10
[IR] <- [MBR]

 

The above four steps make up the fetch cycle. Here is a summary of the fetch cycle:

FETCH   [MAR] <- [PC]
        [PC] <- [PC] + 1
        [MBR] <- [M([MAR])]
        [IR] <- [MBR]

Now we are going to consider the execute cycle.

Let's assume that the instruction just fetched from memory is: add the contents of memory location L to the contents of the data register D0 and store the result in D0. The assembly language form of this instruction is: ADD L. The instruction supplies only one operand address, because the other operand is the data register D0, and our hypothetical computer has only one data register. The RTL form of this instruction would be: [D0] <- [D0] + [M(L)].
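As a point of comparison (only a sketch; the real instruction set is covered later), the real 68000 has eight data registers, so the destination register has to be named explicitly, and the same operation would be written:

        ADD.W   L,D0            [D0] <- [D0] + [M(L)]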

The first step of the execute cycle would therefore be to load the MAR with L, which is the address of the operand in memory. L is stored in the address field of the IR.
[MAR] <- [IR(Address_Field)]

The second step would be to read the actual operand from the memory location whose address is given by the contents of the MAR.
[MBR] <- [M([MAR])]

Now that we have the operand, all that is left is to perform the addition: this is the third step.
[D0] <- [D0] + [MBR]

Figure 8.11
The data flow during the execution phase

 

Here is a summary of the whole fetch/execute cycle. The first four steps belong to the fetch phase, and the remaining steps belong to the execute phase.

FETCH   [MAR] <- [PC]
        [PC] <- [PC] + 1
        [MBR] <- [M([MAR])]
        [IR] <- [MBR]
EXECUTE [MAR] <- [IR(Address_Field)]
        [MBR] <- [M([MAR])]
        [D0] <- [D0] + [MBR]



8.6 INTRODUCTION TO THE ASSEMBLY LANGUAGE PROGRAM
 

We restate the definition of assembly language:

Assembly language is a human readable representation of the numerical language machines work with. In assembly language, numerically coded instructions are replaced by mnemonics, and addresses and constants are usually written in symbolic form (just as in high-level languages).

Mnemonics are just an aid to memory, e.g. ADD is the mnemonic for the instruction that adds two values. Symbolic names are chosen by the programmer to stand for constants or variables. Valid symbolic names are formed of a sequence of letters and digits starting with a letter.
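As a small, hypothetical illustration (the directives and instructions used here are explained properly later in the course, and the names COUNT and TOTAL are made up for the example), a programmer might define two symbolic names and then refer to them by name rather than by numeric address:

COUNT            DC.W    10                The symbolic name COUNT labels a word initialized to 10
TOTAL            DS.W    1                 Reserve one word of storage under the name TOTAL
*
                 MOVE.W  COUNT,D0          The mnemonic MOVE copies the value of COUNT into D0
                 MOVE.W  D0,TOTAL          ...and then stores it in the location named TOTAL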

An assembly language program is written using any suitable text editor. The layout of the parts of the program has to follow certain rules that we will examine soon. The text file containing the source code of the program is called the source file. A program called an assembler translates the source file into the machine's binary code. The translation generated by the assembler is usually written in a file called the object file (also called the binary file), which contains the actual executable numerical code. The object file can be passed to the machine for execution, or can be stored for later use.  The assembler can also create a listing file that contains the assembly language program after assembly together with any messages. The listing file can be stored on disk, or directed to the screen or to a printer. We will cover assemblers in more detail in the second part of the course.

Here is what a 68000 assembly language program might look like:

                 ORG     $1000 
PRIMEARRAY       DS.W    256 
N                DC.W    25 
* 
*  THIS SPACE AVAILABLE FOR ADVERTISEMENT 
* 
*  (Lines starting with an asterisk are ignored by the Assembler) 
* 
                 CLR.L   D0                D0 = INDEX = 0 
                 MOVE.L  #3,D1             D1 = ODDNUMB = 3 
                 MOVE.L  #2,D2             D2 IS CURRENT 
* 
                 LEA     PRIMEARRAY,A0      
                 MOVE.W  #2,(A0)           2 IS THE FIRST PRIME NUMBER 
* 
                 MOVE.W  (A0,D0.W),D3      MOVE PRIMEARRAY[INDEX] TO D3 
                 MULU    D3,D3             D3 = SQUARE(PRIMEARRAY[INDEX]) 
* 
                 MOVE.W  N,D7 
                 ASL     #1,D7 
* 
WHILE            CMP.W   D1,D3 
                 BLS     ELSE 
* 
                 LEA     2(A0),A1        Here, the size of an int is 2 bytes 
                 MOVE.W  D0,D5 
FOR              SUB.W   #2,D5 
                 CMP.W   #2,D5 
                 BLT     EXIT_FOR 
                 CLR.L   D6 
                 MOVE.W  D1,D6 
                 DIVU    (A1)+,D6 
                 SWAP    D6 
                 CMP.W   #0,D6 
                 BEQ     INCREMENT 
                 BRA     FOR 
* 
EXIT_FOR         MOVE.W  D1,(A0,D2)        D1 CONTAINS A PRIME NUMBER 
                 ADD.W   #2,D2 
INCREMENT        ADD.W   #2,D1 
                 BRA     CHECK_WHILE 
* 
ELSE             ADD.W   #2,D0  INDEX++ 
                 MOVE.W  (A0,D0.W),D3      MOVE PRIMEARRAY[INDEX] TO D3 
                 MULU    D3,D3             D3 = SQUARE(PRIMEARRAY[INDEX]) 
* 
CHECK_WHILE      CMP.W   D7,D2 
                 BCC     ENDPROG 
                 BRA     WHILE 
* 
ENDPROG          END     $1000 
 

Please don't try to understand what this program is doing, as we include it only to show you the general layout of an assembly language program. Okay, if you really insist, this program calculates prime numbers and stores them into an array. You'll see more beasts of this kind towards the end of the semester, maybe in assignments, or tests... In fact, this program is a variation on a solution of one of the course's past assignments.

What you should remember from your first encounter with a 68K program is that:

    - each line contains at most one instruction or directive;
    - a line may contain up to four fields: a label (such as PRIMEARRAY or WHILE), an operation mnemonic, its operands, and a comment;
    - lines starting with an asterisk (*) are comment lines and are ignored by the Assembler;
    - the $ sign in front of a number (e.g. $1000) indicates that the number is written in hexadecimal;
    - the # sign in front of an operand (e.g. #2) indicates a literal, i.e. a value used directly rather than the address of a memory location.

Since we already mentioned literals, it might be time to give a more formal definition of the term literal, which we'll encounter often in the future.

Definition:
A literal is an operand that is used directly as an actual value by the computer, as opposed to a reference to a memory location.

As we just said, to be considered a literal an operand must be preceded by the # sign. The difference between ADD 5,D0 and ADD #5,D0 is that the first instruction adds the contents of memory location 5 to the contents of data register D0, while the second instruction adds the number 5 itself to the contents of D0. In both cases, the result is stored in D0.
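To make the difference concrete, suppose (purely for the sake of the example) that memory location 5 holds the value 100 and that D0 initially holds 1:

        ADD     5,D0            D0 becomes 1 + 100 = 101 (the CONTENTS of location 5 are used)
        ADD     #5,D0           D0 becomes 101 + 5 = 106 (the number 5 itself is used)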
 

Now, we are going to provide you with the listing file created by the Assembler for this particular program.
 

Source file: EXAMPL.X68 
Assembled on: 98-06-05 at: 15:21:36 
          by: X68K PC-2.1 Copyright (c) University of Teesside 1989,93 
Defaults: ORG $0/FORMAT/OPT A,BRL,CEX,CL,FRL,MC,MD,NOMEX,NOPCO 

    1  00001000                        ORG       $1000 
    2  00001000 00000200     PRIMEARRAY: DS.W      256 
    3  00001200 0019         N:        DC.W      25 
    4                        * 
    5                        *  THIS SPACE AVAILABLE FOR ADVERTISEMENT 
    6                        * 
    7                        *  (Lines starting with an asterisk are ignored by the Assembler) 
    8                        * 
    9  00001202 4280                   CLR.L     D0                    ;D0 = INDEX = 0 
   10  00001204 223C00000003           MOVE.L    #3,D1                 ;D1 = ODDNUMB = 3 
   11  0000120A 243C00000002           MOVE.L    #2,D2                 ;D2 IS CURRENT 
   12                        * 
   13  00001210 41F81000               LEA       PRIMEARRAY,A0 
   14  00001214 30BC0002               MOVE.W    #2,(A0)               ;2 IS THE FIRST PRIME NUMBER 
   15                        * 
   16  00001218 36300000               MOVE.W    (A0,D0.W),D3          ;MOVE PRIMEARRAY[INDEX] TO D3 
   17  0000121C C6C3                   MULU      D3,D3                 ;D3 = SQUARE(PRIMEARRAY[INDEX]) 
   18                        * 
   19  0000121E 3E381200               MOVE.W    N,D7 
   20  00001222 E347                   ASL       #1,D7 
   21                        * 
   22  00001224 B641         WHILE:    CMP.W     D1,D3 
   23  00001226 63000030               BLS       ELSE 
   24                        * 
   25  0000122A 43E80002               LEA       2(A0),A1              ;Here, the size of an int is 2 bytes 
   26  0000122E 3A00                   MOVE.W    D0,D5 
   27  00001230 5545         FOR:      SUB.W     #2,D5 
   28  00001232 0C450002               CMP.W     #2,D5 
   29  00001236 6D000014               BLT       EXIT_FOR 
   30  0000123A 4286                   CLR.L     D6 
   31  0000123C 3C01                   MOVE.W    D1,D6 
   32  0000123E 8CD9                   DIVU      (A1)+,D6 
   33  00001240 4846                   SWAP      D6 
   34  00001242 0C460000               CMP.W     #0,D6 
   35  00001246 6700000A               BEQ       INCREMENT 
   36  0000124A 60E4                   BRA       FOR 
   37                        * 
   38  0000124C 31812000     EXIT_FOR: MOVE.W    D1,(A0,D2)            ;D1 CONTAINS A PRIME NUMBER 
   39  00001250 5442                   ADD.W     #2,D2 
   40  00001252 5441         INCREMENT: ADD.W     #2,D1 
   41  00001254 6000000A               BRA       CHECK_WHILE 
   42                        * 
   43  00001258 5440         ELSE:     ADD.W     #2,D0                 ;INDEX++ 
   44  0000125A 36300000               MOVE.W    (A0,D0.W),D3          ;MOVE PRIMEARRAY[INDEX] TO D3 
   45  0000125E C6C3                   MULU      D3,D3                 ;D3 = SQUARE(PRIMEARRAY[INDEX]) 
   46                        * 
   47  00001260 B447         CHECK_WHILE: CMP.W     D7,D2 
   48  00001262 64000004               BCC       ENDPROG 
   49  00001266 60BC                   BRA       WHILE 
   50                        * 
   51           00001000     ENDPROG:  END       $1000 

Lines: 51, Errors: 0, Warnings: 0. 

 

As you can see, a listing file contains the text of the program together with some additional information: the Assembler automatically adds a line number at the beginning of each line, and then gives the assembled code, in hexadecimal, in two columns. The first of those two columns gives the address of the memory location where the instruction is stored, and the second column gives the actual numerical representation of the instruction. If any errors are detected during assembly, error messages appear on the line where the error was detected.
 

IMPORTANT NOTE
 

The same 68000 assembly language program can be written in a variety of different syntaxes depending on the Assembler that translates it into machine code and on the operating system it runs on.

Unfortunately, there is no one way of writing assembly language programs, and every Assembler has its own set of syntactic rules. In general, the differences tend to be relatively minor. Note that here we are not talking about different assembly languages, only about different syntactic rules.

In this course, we study one assembly language, the Motorola 68000 assembly language, as represented by two different syntaxes: the official Motorola syntax and the MIT syntax, which we will call the Milo syntax. Don't despair, the two syntaxes are not as different as they appear at first sight. You have to know both of them because both are very popular, and most textbooks use one or the other. The official Motorola syntax is a little easier to read and understand, which is why it will be our notation of choice for examples and also for tests. However, the School of Computer Science at McGill has a Motorola 68040 processor called Milo, which is accessed through the university's network of UNIX-based machines. Since your assignments must run on that system, you also have to know its syntax, otherwise you won't be able to write your assignments. This is why we'll teach you to use both notations.

The example of a 68000 program shown above is written in the official Motorola syntax and is meant to run on the University of Teesside 68000 simulator. Click here for more information on the University of Teesside 68000 simulator and for a free download of that software. We will now show a different 68000 program adapted to run on Milo. Again, please don't worry if you don't understand the program, as it is here only to show you the similarities and the differences between the Motorola syntax and the Milo syntax.

.text 

.set neg_param, -1                        |Error code 
.set overf, -2                            |Error code 

.globl _fact 

_fact: 
#1st: fetch the input parameter 

        link  a6, #0                      |Frame pointer 
        movel a6@(8),d1 
        bmi   err1 

#2nd: actual computation 

        movel #1,d0                       |This 
loop:   tstl  d1                          |space 
        beq   lpar                        |available 
        mulsl d1,d0                       |for comments 
lpar:   dbvs  d1,loop                     | 
        bvs   err2 

the_end: unlk a6 
         rts 
  

#1st error handling: 

err1:   movel #neg_param,d0 
        bra   the_end 

#2nd error handling: 

err2:   movel #overf,d0 
        bra   the_end 

 
For now, the main points that you should remember about the Milo syntax are that:

    - assembler directives begin with a period (e.g. .text, .set, .globl);
    - labels are always followed by a colon (e.g. loop:, the_end:);
    - mnemonics are written in lower case, and the operand size is appended directly to the mnemonic (movel instead of MOVE.L);
    - registers are written in lower case (d0, a6), and address register indirect addressing with a displacement is written a6@(8) instead of 8(A6);
    - comments are introduced by the # sign at the start of a line, or by the | sign at the end of a line.

A short side-by-side comparison of the two syntaxes is given below.
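For instance, here is the same handful of instructions written in both notations (the Motorola column follows the style of the first example program, the Milo column the style of the second):

         Motorola syntax               Milo syntax
         MOVE.L  #3,D1                 movel   #3,d1
         MOVE.L  8(A6),D1              movel   a6@(8),d1
LOOP     TST.L   D1                    loop:   tstl    d1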



8.7 SUMMARY

In this lecture we looked at the general features of the Von Neumann machine, in which a single memory holds both instructions and data, and at the buses that connect the CPU to that memory. We then surveyed the main components of real computers (registers, the ALU, the Control Unit, main memory, caches, I/O devices and buses), introduced a simplified model of the 68000 processor built around the MAR, MBR, IR, PC, D0, ALU and CU, and used RTL to trace how an instruction is fetched and executed. Finally, we had a first look at 68000 assembly language programs, the assembler and its listing files, and the two syntaxes (Motorola and Milo) used in this course.
 

Copyright © McGill University, 1998. All rights reserved.
Reproduction of all or part of this work is permitted for educational or research purposes provided that this copyright notice is included in any copy.